Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Software Engineer-AI/ML, AWS Neuron Inference

at Amazon



Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 6 hours ago


Compensation
$143,700 – $194,400 USD

Location
Seattle, United States

Senior software engineer on the Machine Learning Inference Applications team, focused on the development and performance optimization of core building blocks for LLM inference, including attention, MLP, quantization, speculative decoding, and mixture of experts. Works closely with chip architects, compiler engineers, and runtime engineers to deliver performance and accuracy on AWS Neuron devices across models. Responsibilities include adapting the latest research in LLM optimization to Neuron chips to maximize performance for both open-source and internally developed models, working across multiple teams. The role emphasizes collaboration, mentorship, and rapid delivery of optimized inference pipelines.

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators. This role is for a senior software engineer on the Machine Learning Inference Applications team, responsible for the development and performance optimization of the core building blocks of LLM inference: attention, MLP, quantization, speculative decoding, mixture of experts, and more.

The team works side by side with chip architects, compiler engineers, and runtime engineers to deliver performance and accuracy on Neuron devices across a range of models, such as Llama 3.3 70B, Llama 3.1 405B, DBRX, and Mixtral.

Key job responsibilities
Responsibilities of this role include adapting the latest research in LLM optimization to Neuron chips to extract the best performance from both open-source and internally developed models. Working across teams and organizations is key.

About the team
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge-sharing and mentorship. Our senior members provide one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise, so you feel empowered to take on more complex tasks in the future.

Basic Qualifications

- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience designing or architecting new and existing systems (design patterns, reliability, and scaling)
- Experience programming with at least one software programming language
- Strong fundamentals in machine learning models (architecture, training, and inference lifecycles), along with work experience applying optimizations to improve model performance

Preferred Qualifications

- 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent
- Hands-on experience with PyTorch or JAX, preferably developing and deploying LLMs in production on GPUs, Neuron, TPUs, or other AI acceleration hardware

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits, including health insurance (medical, dental, vision, and prescription coverage; Basic Life & AD&D insurance with optional Supplemental Life plans; EAP; mental health support; a medical advice line; Flexible Spending Accounts; and adoption and surrogacy reimbursement), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at https://amazon.jobs/en/benefits.



USA, WA, Seattle - 143,700.00 - 194,400.00 USD annually

