Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Distributed Inferencing Software Engineer - AI Models

at Advanced Micro Devices

Industry: Not specified


Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 4 hours ago


Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

AMD is seeking a software engineer focused on distributed AI inferencing on AMD GPUs to optimize AI models and benchmarks across multi-GPU and multi-node clusters. You will implement and benchmark distributed workloads, apply parallelization strategies, and collaborate with internal GPU library teams to drive high throughput and low latency. You will contribute to distributed model management, model zoos, monitoring, benchmarking and documentation. A strong background in C++/Python AI development and GPU computing is required.

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:
AMD is looking for a software engineer who is passionate about distributed inferencing on AMD GPUs and about improving the performance of key applications and benchmarks. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

THE PERSON:
Strong technical and analytical skills in C++/Python AI development, with experience solving performance problems and investigating scalability on multi-GPU, multi-node clusters. Able to work as part of a team while also working independently: defining goals and scope and leading your own development effort.
KEY RESPONSIBILITIES:
- Enable and benchmark AI models on distributed systems
- Work in a distributed computing setting to optimize for scale-up (multi-GPU), scale-out (multi-node) and scale-across systems
- Collaborate with internal GPU library teams to analyze and optimize distributed workloads for high throughput and low latency
- Apply expertise in parallelization strategies for AI workloads, selecting the best-performing strategy for each configuration
- Contribute to distributed model management, model zoos, monitoring, benchmarking and documentation

PREFERRED EXPERIENCE:
- Knowledge of GPU computing (HIP, CUDA, OpenCL)
- AI framework engineering experience (vLLM, SGLang, Llama.cpp)
- Understanding of KV cache transfer mechanisms and options (Mooncake, NIXL/RIXL) and Expert Parallelization (DeepEP/MORI/PPLX-Garden)
- Excellent C/C++/Python programming and software design skills, including debugging, performance analysis and test design
- Experience running workloads, especially AI models, on large-scale heterogeneous clusters
- Familiarity with clusters and orchestration software (SLURM, K8s)

ACADEMIC CREDENTIALS:
- Master's or PhD, or equivalent experience, in Computer Science, Computer Engineering or a related field

#LI-JG1

Benefits offered are described at: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.
