
Senior Applied Deep Learning Research Scientist, Efficiency

at Nvidia

Industry
Not specified


Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 9 hours ago


Compensation
$224,000 – $356,500 USD


City
Not specified
Country
United States

Join NVIDIA’s Applied Deep Learning Research (ADLR) – Efficiency team to make deep learning faster and more energy-efficient on GPUs. You will research low-bit number representations and pruning and their impact on inference and training accuracy, including co-designing future neural architectures and optimizers. You will run large-scale deep learning experiments to validate ideas and analyze the effects of efficiency improvements, and you will innovate with new algorithms to improve efficiency while preserving accuracy. You will collaborate across hardware, software and DL architectures and, where appropriate, publish or open-source your results.

We are now looking for an Applied Deep Learning Research Scientist, Efficiency!

Join our ADLR – Efficiency team to make deep learning faster and less energy-hungry! Our team influences next-generation hardware to make AI more efficient; we work on the Nemotron series of models to make our state-of-the-art deep learning models the most efficient open-source models available; and we develop new technology, software and algorithms to optimize neural networks for training and deployment. Topics include quantization, sparsity, optimizers, reinforcement learning, efficient architectures and pre-training. Our team sits inside the Nemotron pre-training team and collaborates across the company to make NVIDIA GPUs the most efficient AI platform possible. Our work reaches the entire deep learning world. We are looking for applied researchers who want to develop new technologies for efficiency, and who want to understand the "why" in efficiency: getting to the root cause of why things do or do not work, and using that knowledge to develop new algorithms, numeric formats and architecture improvements.

What you'll be doing:

  • Research low-bit number representations and pruning, and their effect on neural network inference and training accuracy. This covers the requirements of existing state-of-the-art neural networks, as well as co-design of future neural network architectures and optimizers.

  • Innovate with new algorithms to make deep learning more efficient while retaining accuracy, and open-source or publish these algorithms for the world to use.

  • Run large-scale deep learning experiments to prove out ideas and analyze the effects of efficiency improvements.

  • Collaborate across the company with teams making the hardware, software and deep learning architectures.
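To make the first duty concrete: "low-bit number representations" commonly means storing weights or activations in a few bits and studying the resulting accuracy loss. The sketch below is illustrative only (not NVIDIA code, and the function name is our own invention); it implements symmetric per-tensor "fake" quantization in NumPy and compares the reconstruction error of 8-bit versus 4-bit representations:

```python
import numpy as np

def fake_quantize(x: np.ndarray, bits: int = 8) -> np.ndarray:
    """Symmetric per-tensor quantize-dequantize ("fake quant")."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for int8, 7 for int4
    scale = np.max(np.abs(x)) / qmax     # map the largest magnitude to qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)  # quantize to integers
    return q * scale                     # dequantize back to float

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)  # stand-in for a weight tensor

for bits in (8, 4):
    mse = np.mean((w - fake_quantize(w, bits)) ** 2)
    print(f"int{bits} reconstruction MSE: {mse:.6f}")
```

Fewer bits mean a coarser grid and a larger mean-squared error, which is exactly the accuracy-versus-efficiency trade-off this role investigates at scale.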

What we need to see:

  • PhD in AI, computer science, computer engineering, math or a related field; equivalent experience in some of the areas listed below can substitute for an advanced degree.

  • 5+ years of relevant industrial research experience.

  • Familiarity with state-of-the-art neural network architectures, optimizers and LLM training.

  • Experience with modern DL training frameworks and/or inference engines.

  • Fluency in Python and solid coding/software-engineering practices.

  • A proven track record of publications and/or the ability to run large-scale experiments.

  • A strong interest in neural network efficiency.

Ways to stand out from the crowd:

  • Experience in quantization, pruning, numerics and efficient architectures.

  • A background in computer architecture.

  • Experience with GPU computing, kernels, CUDA programming and/or performance analysis.

Your base salary will be determined by your location, experience, and the pay of employees in similar positions. The base salary range is $192,000 – $304,750 USD for Level 4, and $224,000 – $356,500 USD for Level 5.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 8, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

