Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Software Development Engineer

at Advanced Micro Devices

Industry: Not specified

Graduate · No visa sponsorship · Data Science/AI/ML

Posted 11 hours ago

Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

At AMD, we are seeking a dynamic, upbeat Software Development Engineer to build robust, high-performance software components for AI inference across multi-GPU systems. The role emphasizes full-stack development within AI inference systems, with a focus on model behavior and framework integration, and involves collaboration with internal GPU library teams and open-source maintainers. You will optimize DL/LLM frameworks, implement features for large language models and multimodal architectures, and profile and optimize performance in multi-GPU/multi-node environments. This is an early-career role that calls for production-quality, memory-conscious, performance-focused Python/C++ code.
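For orientation, the KV caching mentioned throughout this posting can be sketched in a few lines of plain Python. This is a minimal single-head illustration, not AMD's or any framework's implementation; all names here are hypothetical. The point is the data-structure trick: during autoregressive decoding, each step appends one key/value pair to a cache instead of recomputing attention inputs for the entire prefix.

```python
import math

def attend(q, K, V):
    # Scaled dot-product attention for one query vector against cached
    # keys/values (lists of equal-length float vectors).
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
    m = max(scores)  # subtract max for a numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[j] for w, v in zip(weights, V)) for j in range(d)]

class KVCache:
    """Grow-only per-layer cache: each decode step appends one key/value
    pair, so attention over an n-token prefix costs O(n) per step rather
    than recomputing all keys and values from scratch."""
    def __init__(self):
        self.K, self.V = [], []

    def step(self, q, k, v):
        self.K.append(k)
        self.V.append(v)
        return attend(q, self.K, self.V)
```

Production inference engines layer paging, batching, and quantization on top of this idea (e.g., vLLM's PagedAttention allocates the cache in fixed-size blocks), but the append-per-step structure is the core.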

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

We are looking for a dynamic, upbeat software engineer to join our growing team. Your work will focus on building robust, efficient software components that enable high-performance execution of large language models and multimodal models across multi-GPU systems. You’ll collaborate with internal GPU library teams and open-source maintainers to implement features that improve throughput, latency, and scalability. This role emphasizes full-stack development within AI inference systems, with a strong focus on model behavior and framework integration.

THE PERSON:

A motivated early-career software engineer with solid foundational skills in Python and/or C++ in Linux environments. The ideal candidate has hands-on experience or strong academic exposure to deep learning systems, understands LLM and multimodal model architectures, and is eager to write production-quality code that balances functionality, correctness, and performance.

KEY RESPONSIBILITIES:

Deep Learning & LLM Framework Optimization: Optimize major DL/LLM frameworks (PyTorch, vLLM, SGLang) for AMD GPUs and contribute improvements upstream.
Model-Aware Implementation: Build features that interact closely with LLMs and multimodal architectures (e.g., Llama, Qwen-VL, Wan), requiring understanding of attention mechanisms, cross-modal fusion, KV caching, and quantization.
Performance-Conscious Coding: Write efficient, scalable code while considering memory usage, concurrency, and bottlenecks in multi-GPU environments.
Profiling: Use profiling tools to evaluate the impact of your changes, identify regressions, and validate performance improvements as part of the development cycle.
End-to-End Performance Engineering: Perform comprehensive profiling to identify bottlenecks and implement system, memory, and communication optimizations across multi-GPU and multi-node setups.
Compiler & Pipeline Acceleration: Leverage compiler technologies and graph compilers to accelerate the full deep learning and inference pipeline.
Research & Advanced Techniques: Prototype and integrate emerging optimization methods, such as speculative decoding and weight-only quantization, into production systems.
Cross-Team & Open-Source Collaboration: Work with internal GPU library teams and open-source maintainers to align improvements and ensure seamless upstream integration.
Software Engineering Excellence: Apply robust engineering practices to deliver maintainable, reliable, production-quality performance optimizations.

MANDATORY EXPERIENCE:

Software Engineering Skills: Familiarity with Python; familiarity with C++ or asynchronous programming is a plus.
Understanding of LLM or Multimodal Model Concepts: Knowledge of transformer architectures, attention mechanisms, vision-language alignment, and inference pipelines (e.g., combined image and text input handling); theoretical grounding in Transformer, attention, MoE, and KV cache concepts, as well as quantization (FP8/FP4).
Linux Development Environment: Comfortable using command-line tools, Git, and standard debugging/profiling utilities.
End-to-End LLM Performance Engineering: Experience profiling and diagnosing compute, memory, and communication bottlenecks across multi-GPU and multi-node environments.
Software Engineering Excellence & Community Contribution (a plus): Solid Python/C++ coding skills, experience with debugging and testing practices, a proven ability to deliver maintainable performance-critical software, and a track record of open-source contributions with strong self-motivation.
GPU Kernel Development & Optimization (a plus): Knowledge of tuning high-performance GPU kernels for AMD GPUs using HIP, CUDA, and assembly, and tools such as CK, CUTLASS, and Triton.
Compiler & System-Level Optimization (a plus): Foundational knowledge of LLVM, ROCm, and compiler-driven techniques for improving kernel and system performance.
Model Architectures & Optimization Expertise: Experience with multimodal models (e.g., Qwen-VL, Qwen-Image-Edit, Wan) or diffusion-based generative models; familiarity with techniques such as quantization, PagedAttention, continuous batching, or speculative decoding.
Development Skills: Exposure to GPU computing (ROCm, CUDA) or performance profiling tools (e.g., PyTorch Profiler).
Distributed Systems Experience: Experience with distributed inference for large-scale models (e.g., tensor parallelism, pipeline parallelism).

ACADEMIC & PREFERRED QUALIFICATIONS:

Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field.

Benefits offered are described in AMD Benefits at a Glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.
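The weight-only quantization the posting mentions can be illustrated with a symmetric round-to-nearest scheme in plain Python. This is a didactic sketch only (real FP8/FP4 deployments use hardware-specific formats and per-channel or per-group scales); the function names are illustrative, not any library's API. The idea: store small integer codes plus one float scale, and dequantize on the fly at inference to cut weight memory and bandwidth.

```python
def quantize_weights(w, bits=8):
    # Symmetric round-to-nearest weight-only quantization: map floats to
    # integer codes in [-(qmax+1), qmax] with a single per-tensor scale.
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for int8
    scale = max(abs(x) for x in w) / qmax or 1.0  # avoid scale == 0
    codes = [max(-qmax - 1, min(qmax, round(x / scale))) for x in w]
    return codes, scale

def dequantize(codes, scale):
    # Reconstruct approximate float weights from codes and the scale.
    return [c * scale for c in codes]
```

With a single per-tensor scale, the worst-case reconstruction error is half a quantization step (scale / 2); per-group scales, as used in practice, shrink that error further for tensors with outlier weights.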
AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.
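Of the advanced techniques the posting names, speculative decoding is the least self-explanatory, so a toy greedy variant is sketched below. All names are hypothetical and this is not any production engine's algorithm: a cheap draft model proposes a few tokens, the expensive target model verifies them, and generation keeps the longest agreeing prefix plus one corrected token, so the output matches plain greedy decoding with the target model while amortizing its cost.

```python
def speculative_decode(draft_next, target_next, prompt, n_tokens, k=4):
    """Toy greedy speculative decoding. `draft_next` and `target_next`
    each map a token sequence to the next token (greedy argmax stand-ins).
    Every round accepts at least one token, so the loop terminates."""
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # Draft k tokens autoregressively with the cheap model.
        proposal = []
        for _ in range(k):
            proposal.append(draft_next(out + proposal))
        # Verify: accept until the target model first disagrees, then
        # substitute the target's own token and restart drafting.
        accepted = []
        for i, tok in enumerate(proposal):
            if target_next(out + proposal[:i]) == tok:
                accepted.append(tok)
            else:
                accepted.append(target_next(out + proposal[:i]))
                break
        out.extend(accepted)
    return out[len(prompt):][:n_tokens]
```

When the draft agrees often, each target "pass" yields several tokens; when it never agrees, the scheme degrades gracefully to one corrected token per round, i.e., ordinary greedy decoding.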
