
Senior Performance Engineer - Deep Learning

at Nvidia

Industry: Not specified

Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 3 hours ago


Compensation
$152,000 – $287,500 USD


City: Not specified
Country: United States

Join NVIDIA's Deep Learning performance engineering team to build and optimize libraries and tools that accelerate DL researchers and engineers in designing, developing, and deploying efficient AI applications. You will help advance Transformer Engine and collaborate on systems research to improve model performance, including low-precision training and multi-GPU scaling. The role involves implementing, benchmarking, and optimizing new DL models on NVIDIA GPUs, contributing to MLPerf benchmarks, and engaging with open-source communities and enterprise partners. The work also influences hardware and core software design as NVIDIA continues to push the state-of-the-art in AI.

Our Deep Learning model performance engineering team at NVIDIA is hiring software engineers at all experience levels to build and optimize the libraries and tools that enable Deep Learning researchers and engineers to design, develop, and deploy efficient AI applications. We are an ambitious and diverse team that builds optimizations directly into mainstream open-source Deep Learning frameworks, PyTorch and JAX, boosting performance at every level of NVIDIA's AI stack. Our team has a wide collaborative footprint, working not only with multiple teams across NVIDIA but also with the broader open-source community to deliver state-of-the-art Deep Learning performance on the best AI platform in the world!

What you will be doing:

  • Build and support Transformer Engine, the open-source library for accelerating the training of Large Language Models.

  • Collaborate on systems research that improves Deep Learning model performance, such as training in extremely low precision and new parallelism methods.

  • Implement, benchmark, and optimize new Deep Learning models such as LLMs straight out of groundbreaking research to scale efficiently on NVIDIA GPUs and systems.

  • Build and contribute to NVIDIA submissions on community benchmarks such as MLPerf.

  • Engage with the open-source community as well as support enterprise customers and partners by delivering the benefits of NVIDIA’s latest hardware and software innovations.

  • Influence the design of new hardware generations and core platform software components for NVIDIA hardware and systems.

What we need to see:

  • BS or equivalent experience in Computer Science, Electrical Engineering, or a related field.

  • 3+ years of experience in C++ and Python programming.

  • Strong background, experience, or coursework in parallel systems programming, preferably on GPUs.

  • Knowledge of Computer Architecture, Code Optimization, and/or Operating Systems.

  • Proven experience in developing large software projects.

  • Excellent verbal and written communication skills.

Ways to stand out from the crowd:

  • Experience in PyTorch, JAX, or any other DL framework.

  • Experience with performance analysis, profiling, and code optimization techniques, especially with multi-GPU or multi-node systems.

  • Knowledge of modern LLM architectures, attention mechanisms, and/or low-level DL libraries such as cuBLAS, cuDNN, and cuSOLVER.

  • Experience writing GPU kernels using CUDA, OpenAI Triton, CuTeDSL, Pallas, or similar libraries.

  • Past contributions to the open-source community and/or experience working with multidisciplinary teams.

Your base salary will be determined by your location, experience, and the pay of employees in similar positions. The base salary range is 152,000–241,500 USD for Level 3 and 184,000–287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 8, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
