Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Software Development Engineer – Distributed Inference

at Advanced Micro Devices

Industry not specified

Graduate · No visa sponsorship · C/C++/C#

Posted a day ago

Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

AMD is seeking a Software Development Engineer focused on distributed AI inference on AMD GPUs. You will join a core team to optimize multi-GPU, multi-node AI workloads, benchmark performance, and contribute to scalable model management and tooling. The role involves C++/Python development, performance analysis, benchmarking automation, and collaboration with internal GPU library teams to achieve high throughput and low latency. You will develop parallelization strategies and build real-time dashboards for performance, accuracy, and reliability.

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:
AMD is looking for a software engineer who is passionate about distributed inferencing on AMD GPUs and about improving the performance of key applications and benchmarks. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

THE PERSON:
We are seeking a software engineer with strong technical expertise in C++/Python development, solving performance issues, and investigating scalability on multi-GPU, multi-node clusters. The ideal candidate is also passionate about quality assurance, benchmarking, and automation in the AI/ML space; thrives in both collaborative and independent environments; demonstrates excellent problem-solving skills; and takes ownership in defining goals and delivering impactful solutions.

KEY RESPONSIBILITIES:
Distributed AI Enablement and Benchmarking: Enable and benchmark AI models on large-scale distributed systems to evaluate performance, accuracy, and scalability.
Scalable Systems Optimization: Optimize AI workloads across scale-up (multi-GPU), scale-out (multi-node), and scale-across distributed system configurations.
Cross-Team Collaboration: Collaborate closely with internal GPU library teams to analyze and optimize distributed workloads for high throughput and low latency.
Parallelization Strategies: Develop and apply optimal parallelization strategies for AI workloads to achieve best-in-class performance across diverse system configurations.
Model Infrastructure and Management: Contribute to distributed model management systems, model zoos, monitoring frameworks, benchmarking pipelines, and technical documentation.
Performance Monitoring and Visualization: Build and maintain real-time dashboards reporting performance, accuracy, and reliability metrics for internal stakeholders and external users.

PREFERRED EXPERIENCE:
AI Framework Engineering: Hands-on experience with AI inference or serving frameworks such as vLLM, SGLang, and Llama.cpp.
KV Cache and Expert Parallelization: Understanding of KV cache transfer mechanisms and technologies (e.g., Mooncake, NIXL/RIXL) and expert parallelization approaches (e.g., DeepEP, MORI, PPLX-Garden).
Programming and Software Design: Strong C/C++ and Python skills, with experience in software design, debugging, performance analysis, and test development.
Large-Scale Distributed Systems: Experience running AI workloads on large-scale, heterogeneous compute clusters.
Cluster and Orchestration Systems: Familiarity with cluster management and orchestration platforms such as SLURM and Kubernetes (K8s).
Development Tools and Workflows: Experience with GitHub, Jenkins, or similar CI/CD tools and modern development workflows.

ACADEMIC CREDENTIALS:
Undergraduate, Master’s, or PhD degree in Computer Science, Computer Engineering, or a related field, or equivalent practical experience.

#LI-JG1

Benefits offered are described in AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.
AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.
