
Principal Artificial Intelligence Algorithms Engineer

at Nvidia

Industry not specified

Tech Lead · No visa sponsorship · Data Science / AI / ML

Posted 5 hours ago

Compensation
$272,000 – $431,250 USD

City: Not specified
Country: United States

Join NVIDIA's Megatron Core and NeMo Framework team to design, implement, and optimize distributed training algorithms and model-parallel paradigms for large-scale AI models. You will expand capabilities, define robust APIs, and optimize performance across the full model lifecycle, from data preprocessing to deployment. You will collaborate with internal partners and the open-source community to deliver scalable AI tooling and pipelines. This role requires an advanced degree or equivalent experience and a track record of strengthening AI libraries with new innovations.

NVIDIA is looking for engineers for our core AI Frameworks (Megatron Core and NeMo Framework) team to design, develop, and optimize diverse real-world workloads. Megatron Core and NeMo Framework are open-source, scalable, cloud-native frameworks built for researchers and developers working on Large Language Model (LLM) and Multimodal (MM) foundation model pretraining and post-training. Our GenAI frameworks provide end-to-end model training, including pretraining, reasoning, alignment, customization, evaluation, deployment, and tooling to optimize performance and user experience.

In this critical role, you will expand Megatron Core and NeMo Framework's capabilities, enabling users to develop, train, and optimize models by designing and implementing the latest distributed training algorithms, model-parallel paradigms, and model optimizations; defining robust APIs; meticulously analyzing and tuning performance; and expanding our toolkits and libraries to be more comprehensive and coherent. You will collaborate with internal partners, users, and members of the open-source community to analyze, design, and implement highly optimized solutions.

What you’ll be doing:

  • Develop algorithms for AI/DL, data analytics, machine learning, or scientific computing

  • Contribute and advance open source Megatron Core and NeMo Framework

  • Solve large-scale, end-to-end AI training and inference challenges, spanning the full model lifecycle from initial orchestration and data preprocessing, through model training and tuning, to model deployment.

  • Work at the intersection of computer architecture, libraries, frameworks, AI applications, and the entire software stack.

  • Innovate and improve model architectures, distributed training algorithms, and model-parallel paradigms.

  • Tune and optimize performance, including model training and fine-tuning with mixed-precision recipes on next-generation NVIDIA GPU architectures.

  • Research, prototype, and develop robust and scalable AI tools and pipelines.

What we need to see:

  • MS or PhD (or equivalent experience) in Computer Science, AI, Applied Math, or a related field, and 10+ years of industry experience.

  • Experience with AI frameworks (e.g., PyTorch, JAX) and/or inference and deployment environments (e.g., TRTLLM, vLLM, SGLang).

  • Proficiency in Python programming, software design, debugging, performance analysis, test design, and documentation.

  • Consistent record of working effectively across multiple engineering initiatives and improving AI libraries with new innovations.

  • Strong understanding of AI/deep-learning fundamentals and their practical applications.

Ways to stand out from the crowd:

  • Hands-on experience in large-scale AI training, with a deep understanding of core compute system concepts (such as latency/throughput bottlenecks, pipelining, and multiprocessing) and demonstrated excellence in related performance analysis and tuning.

  • Expertise in distributed computing, model parallelism, and mixed-precision training.

  • Prior experience with generative AI techniques applied to LLM and multimodal learning (text, image, and video).

  • Knowledge of GPU/CPU architecture and related numerical software.

  • Contributions to open-source deep learning frameworks.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working with us. If you're creative and autonomous, we want to hear from you!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $272,000 – $431,250 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until January 13, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.