
Senior MLOps Engineer, GenAI Framework

at Nvidia

Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 10 hours ago

Compensation
$152,000 – $241,500 USD

City: Not specified
Country: United States

Develop and maintain the CI/CD pipelines and release processes for NVIDIA's Generative AI frameworks, Megatron-LM and NeMo Framework. Implement scalable DevOps solutions to enable the fast-growing team to release software frequently while maintaining high quality and performance. Work with industry-standard tools (Kubernetes, Docker, Slurm, Ansible, GitLab, GitHub Actions, Jenkins, Artifactory, Jira) across hybrid on-premise and cloud environments, and assist with cluster operations. Automate recurring tasks to accelerate research and development, and contribute to QA measures, code analysis, and regression testing while working closely with the CUDA, cuDNN, cuBLAS, and PyTorch teams.

NVIDIA is looking for a dedicated and motivated build and continuous integration (CI/CD) engineer for its GenAI Frameworks (Megatron-LM and NeMo Framework) team. Megatron-LM and NeMo Framework are open-source, scalable, cloud-native frameworks built for researchers and developers working on Large Language Models (LLM), Multimodal (MM) models, and Video Generation. They provide end-to-end model training, including data curation, alignment, customization, evaluation, deployment, and tooling to optimize performance and user experience. Building upon the latest DevOps tools, your work will enable GenAI framework software engineers, deep learning algorithm engineers, and research scientists to work efficiently with a wide variety of deep learning algorithms and software stacks as they seek out opportunities for performance optimization and continuously deliver high-quality software.

Does the idea of pushing the boundaries of innovative research and development excite you? Are you interested in getting exposure to the entire DL SW stack? Then join our technically diverse team of DL algorithm engineers and performance optimization specialists to unlock unprecedented deep learning performance in every domain.

What you’ll be doing:

  • Develop and maintain the continuous integration pipelines and release processes of our Generative AI framework and libraries related to Megatron-LM and NeMo Framework.

  • Implement efficient and scalable DevOps solutions to allow our fast-growing team to release software more frequently while maintaining high quality and maximum performance.

  • Work with industry-standard tools (Kubernetes, Docker, Slurm, Ansible, GitLab, GitHub Actions, Jenkins, Artifactory, Jira) in hybrid on-premise and cloud environments.

  • Assist with cluster operations and system administration (managing servers, team accounts, and clusters).

  • Accelerate research and development cycles by automating recurring tasks such as accuracy and performance regression detection.

  • Develop new quality control measures (e.g., code analysis, backwards compatibility, and regression testing) while employing and advancing best practices.

  • Work closely with the DL framework and library teams (CUDA, cuDNN, cuBLAS, and PyTorch) and with other engineering teams within NVIDIA that provide software, testing, and release infrastructure.
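One responsibility above is automating accuracy and performance regression detection. As a minimal illustrative sketch only (the metric, threshold, and function name are hypothetical, not NVIDIA's actual tooling), a CI job might compare a fresh measurement against a rolling baseline:

```python
import statistics

def detect_regression(baseline_ms, current_ms, tolerance=0.05):
    """Flag a latency regression: True when the current measurement is
    more than `tolerance` (5% by default) slower than the baseline mean."""
    mean = statistics.mean(baseline_ms)
    return current_ms > mean * (1 + tolerance)

# Nightly per-step latencies (ms) vs. today's CI run -- illustrative numbers.
baseline = [152.0, 153.4, 151.8, 154.1]
print(detect_regression(baseline, 170.0))  # well outside tolerance -> True
print(detect_regression(baseline, 153.5))  # within tolerance -> False
```

In practice such a check would run as a pipeline stage and fail the build (or open a ticket) when it returns True.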

What we need to see:

  • BS or MS degree in Computer Science, Computer Architecture or related technical field (or equivalent experience) and 3+ years of industry experience in DevOps and infrastructure engineering.

  • Strong system-level programming skills in languages such as Python and shell scripting.

  • Experience with build/release systems and CI/CD solutions such as GitLab, GitHub, Jenkins, etc.

  • Experience with Linux system administration.

  • Experience with containerization and cluster management technologies like Docker and Kubernetes.

  • Experience with build tools, including Make and CMake.

  • A strong background in source code management (SCM) solutions such as GitLab, GitHub, Perforce, etc.

  • Strong problem-solving and debugging skills.

  • Great teammate who can collaborate and influence others in a dynamic environment.

  • Excellent interpersonal and written communication skills.

Ways to stand out from the crowd:

  • Proven track record with GPU-accelerated systems at scale.

  • Well-versed in DL frameworks such as PyTorch, Jax, or TensorFlow.

  • Expertise in cluster and cloud compute technologies, e.g., Slurm, Lustre, and Kubernetes (k8s).

  • Software and hardware benchmarking on high-performance computing systems.
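For the benchmarking point above, a common pattern is best-of-N wall-clock timing; this is a generic sketch (the helper name and repeat count are illustrative, not any specific NVIDIA harness):

```python
import time

def best_of(fn, *args, repeats=5):
    """Return the fastest of `repeats` wall-clock timings of fn(*args);
    taking the minimum reduces noise from the OS scheduler and caches."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

elapsed = best_of(sum, range(1_000_000))
print(f"best of 5: {elapsed:.6f}s")
```

Real HPC benchmarking would additionally pin processes, warm up GPU kernels, and record hardware counters, but the same measure-repeat-reduce structure applies.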

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 23, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
