Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

AI Training Optimization Engineer

at Advanced Micro Devices

Industry: Not specified


Mid Level · No visa sponsorship · C/C++/C#

Posted a day ago


Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

As part of AMD’s Training Optimization Team, you will help customers train AI models seamlessly and efficiently on AMD GPUs, identify gaps in the training ecosystem, and optimize kernels using HIP, CUDA, and Triton. You will prototype frontier kernel techniques and contribute to kernel agents to accelerate kernel iteration, while collaborating across internal teams to push training performance on large-scale systems. The role involves diagnosing bottlenecks, improving framework integration with ROCm, and driving upstream improvements with open-source maintainers. You should be comfortable working with customers and across GPU library and runtime teams to achieve peak performance.

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

As part of AMD’s Training Optimization Team, you will help customers train AI models seamlessly and efficiently on AMD GPUs. You will identify and fill gaps in AMD’s training ecosystem, optimize critical kernels, and leverage frontier techniques to push the limits of training performance on large-scale systems. You will also contribute to the development of kernel agents—tools that accelerate kernel iteration and ultimately assist humans in achieving extreme GPU performance.

THE PERSON:

You are a strong GPU performance engineer with a solid understanding of algorithms, model architectures, and kernel implementations. You can move fluidly from mathematical concepts to low-level optimization, and you excel at diagnosing real training bottlenecks. You are comfortable working directly with customers and collaborating across internal teams.

KEY RESPONSIBILITIES:

- Support Customers: Ensure smooth training on AMD GPUs by identifying bottlenecks and delivering kernel-level performance improvements.
- Optimize Hot Operators: Design and optimize kernels using HIP, CUDA, and Triton across real training workloads.
- Advance Kernel Agents: Improve agent-based tooling to speed up kernel development and help achieve peak performance.
- Strengthen AMD’s Training Ecosystem: Fill functional gaps, improve framework integration, and enhance ROCm-based training performance.
- Explore Frontier Kernel Techniques: Prototype next-generation kernels (e.g., sparse attention, linear attention ops).
- Collaborate Across Teams: Work with GPU library teams, runtime/communication teams, and open-source maintainers to drive upstream improvements.
- Optimize Distributed Training: Improve performance across multi-GPU and multi-node clusters through better communication/compute overlap and parallelism strategies.

PREFERRED EXPERIENCE:

- Hands-on experience with HIP, CUDA, Triton, and GPU performance tuning.
- Strong understanding of Transformer models, attention mechanisms, and training algorithms.
- Experience profiling and optimizing kernels with low-level tools.
- Familiarity with PyTorch internals, Megatron-LM, DeepSpeed, or other large-scale training frameworks.
- Experience debugging or optimizing distributed training (DP/TP/PP/ZeRO).
- Experience building or optimizing kernel agents, runtime schedulers, or performance-automation tools.
- Contributions to kernel libraries (CUTLASS, CK), Triton, or ML compiler ecosystems.

ACADEMIC CREDENTIALS:

Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent.

#LI-FL1

Benefits offered are described in AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.
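The kernel-optimization work this role describes centers on tiling: decomposing an operator into blocks that map onto GPU work-groups and fast local memory. As a hedged illustration only (plain CPU-side Python, not an actual HIP/CUDA/Triton kernel and not AMD's code), a blocked matrix multiply shows the loop structure such kernels are built around:

```python
# Blocked (tiled) matrix multiply: a CPU-side sketch of the loop
# structure that GPU kernels written in HIP/CUDA/Triton map onto
# thread blocks and shared-memory tiles. Illustration only.

def matmul_blocked(A, B, block=2):
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0.0] * m for _ in range(n)]
    # Iterate over tiles of the output. On a GPU, each (i0, j0) tile
    # would be one work-group, and the k0 loop would stage tiles of A
    # and B into local memory before the inner multiply-accumulate.
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for k0 in range(0, k, block):
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, m)):
                        acc = C[i][j]
                        for kk in range(k0, min(k0 + block, k)):
                            acc += A[i][kk] * B[kk][j]
                        C[i][j] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_blocked(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

The `block` parameter stands in for the tile-size tuning knob that dominates real kernel performance work: on hardware it trades register and local-memory pressure against data reuse, which is exactly the kind of trade-off this role profiles and optimizes.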

