Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Senior DL Algorithms Engineer - Inference Performance

at NVIDIA


Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 10 hours ago


Compensation
$184,000 – $356,500 USD

Location
United States (city not specified)

We are seeking a Senior DL Algorithms Engineer focused on inference performance. You will optimize and analyze NVIDIA's Deep Learning inference workloads across the stack, implement language and multimodal model inference for NVIDIA Inference Microservices (NIMs), and contribute features to TRT-LLM. The role involves profiling bottlenecks, benchmarking state-of-the-art DL model inference, and collaborating with SW/HW co-design teams to push the performance envelope for AI-powered services.

We are now looking for a Senior DL Algorithms Engineer! NVIDIA is seeking senior engineers who are mindful of performance analysis and optimization to help us squeeze every last clock cycle out of Deep Learning workloads. If you are unafraid to work across all layers of the hardware/software stack, from GPU architecture to deep learning frameworks, to achieve peak performance, we want to hear from you! This role offers an opportunity to directly impact the hardware and software roadmap in a fast-growing technology company that leads the AI revolution.

What you will be doing:

  • Implement language and multimodal model inference as part of NVIDIA Inference Microservices (NIMs).

  • Contribute new features, fix bugs and deliver production code to TRT-LLM, NVIDIA’s open-source inference serving library.

  • Profile and analyze bottlenecks across the full inference stack to push the boundaries of inference performance.

  • Benchmark state-of-the-art offerings in DL model inference and perform competitive analysis for the NVIDIA SW/HW stack.

  • Collaborate heavily with other SW/HW co-design teams to enable the creation of the next generation of AI-powered services.

What we want to see:

  • PhD in CS, EE or CSEE or equivalent experience.

  • 5+ years of experience.

  • Strong background in deep learning and neural networks, in particular inference.

  • Experience with performance profiling, analysis and optimization, especially for GPU-based applications.

  • Proficient in C++, PyTorch or equivalent frameworks.

  • Deep understanding of computer architecture, and familiarity with the fundamentals of GPU architecture.

Ways to stand out from the crowd:

  • Proven experience with processor and system-level performance optimization.

  • Deep understanding of modern LLM architectures.

  • Strong fundamentals in algorithms.

  • GPU programming experience (CUDA or OpenCL) is a plus.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $184,000–$287,500 USD for Level 4 and $224,000–$356,500 USD for Level 5.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 22, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
