Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Senior Software Engineer, Quantized Inference

at NVIDIA

Industry not specified

Mid Level · No visa sponsorship · Python

Posted 12 hours ago

Compensation
$152,000 – $287,500 USD

City: Not specified
Country: United States

Join NVIDIA as a Senior Software Engineer for Quantized Inference to accelerate LLM inference by developing quantized and sparse recipes in inference engines (vLLM, TRT-LLM, SGLang). You will translate recipe specifications into performant code, e.g., Triton kernels and quantize/dequantize paths, and ensure per-expert scaling in MoE layers. You will own model export pipelines (ModelOpt, Megatron-LM <-> HuggingFace) and build prototypes/benchmarking harnesses to evaluate throughput and interactivity before full optimization. You'll collaborate with partner inference teams, contribute to productization across Megatron-LM, ModelOpt, and vLLM, and improve developer productivity through CI/build/training infrastructure.

We are now looking for a Senior Software Engineer for Quantized Inference! NVIDIA is seeking software engineers to accelerate the discovery and deployment of efficient inference recipes for LLMs. A recipe defines which operators are transformed into low-precision or sparsified variants — unlocking throughput and latency gains without regressing accuracy or verbosity. Recipes may incorporate techniques such as rotations, block scaling to attenuate outlier impact, or improved calibration data drawn from SFT/RL pipelines.
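
The block-scaling idea mentioned above can be sketched in a few lines. This is a minimal NumPy illustration under assumed conventions (symmetric int8, one scale per block), not NVIDIA's implementation; the function names are hypothetical. It shows why per-block scales attenuate outliers: a single large value only inflates the scale of its own block rather than the whole tensor.

```python
import numpy as np

def quantize_block_scaled(x, block_size=32):
    """Quantize a 1-D fp32 array to int8 with one symmetric scale per block."""
    n = len(x)
    pad = (-n) % block_size
    xp = np.pad(x, (0, pad)).reshape(-1, block_size)
    scales = np.abs(xp).max(axis=1, keepdims=True) / 127.0
    scales = np.where(scales == 0, 1.0, scales)      # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(xp / scales), -127, 127).astype(np.int8)
    return q, scales, n

def dequantize_block_scaled(q, scales, n):
    """Reverse the transform; error per element is bounded by scale/2."""
    return (q.astype(np.float32) * scales).reshape(-1)[:n]

rng = np.random.default_rng(0)
x = rng.standard_normal(100).astype(np.float32)
x[3] = 50.0                                          # a single outlier
q, s, n = quantize_block_scaled(x)
x_hat = dequantize_block_scaled(q, s, n)
# Only the outlier's own block pays the coarser scale; other blocks stay precise.
```

With per-tensor scaling, that one outlier would set the scale for all 100 values; with 32-element blocks, 68 values keep a fine-grained scale.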

Each new recipe demands corresponding kernel and model-level implementations in inference engines (vLLM, TRT-LLM, SGLang). The candidate will translate recipe specifications into functionally correct, performant code, e.g., writing Triton kernels, inserting quantize/dequantize nodes into prefill and decode paths, and ensuring per-expert scaling in MoE layers is handled correctly. From there, the candidate will collaborate with partner inference teams to further optimize throughput and interactivity on target workloads. This work is a core component of our productization effort across Megatron-LM, ModelOpt, and vLLM.
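
The per-expert scaling point can be illustrated with a toy dequantize-on-use path. This is a hedged NumPy sketch with hypothetical names; real engines fuse the scale into the matmul kernel rather than materializing fp32 weights, but the invariant is the same: each MoE expert carries its own scale so an outlier in one expert does not degrade the others.

```python
import numpy as np

def quantize_per_expert(weights):
    """Quantize a stack of expert weight matrices (E, d_in, d_out), one int8 scale per expert."""
    scales = np.abs(weights).max(axis=(1, 2), keepdims=True) / 127.0
    q = np.clip(np.round(weights / scales), -127, 127).astype(np.int8)
    return q, scales

def moe_forward(x, q_weights, scales, expert_id):
    """Route tokens to one expert, applying that expert's own scale on dequantize."""
    w = q_weights[expert_id].astype(np.float32) * scales[expert_id]
    return x @ w

rng = np.random.default_rng(0)
num_experts, d_in, d_out = 4, 8, 8
w = rng.standard_normal((num_experts, d_in, d_out)).astype(np.float32)
q, s = quantize_per_expert(w)
x = rng.standard_normal((2, d_in)).astype(np.float32)
y = moe_forward(x, q, s, expert_id=1)
y_ref = x @ w[1]                                     # full-precision reference
```

A common bug class here is applying a shared scale across experts (or the wrong expert's scale after routing), which is exactly the kind of mixed-precision numerics issue the role calls out.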

What you'll be doing:

  • Implement quantized and sparse recipes in inference engines (vLLM, TRT-LLM, SGLang)

  • Own model export pipelines (ModelOpt, Megatron-LM <-> HuggingFace), ensuring quantized checkpoints serialize correctly for downstream serving

  • Build prototypes and benchmarking harnesses to evaluate recipe throughput/interactivity before full optimization

  • Develop data analysis tooling and visualizations for numerics debugging

  • Improve developer productivity across the team: CI, build systems, training infrastructure, pipeline friction

  • Participate in code reviews and incorporate feedback

What we need to see:

  • Proficient in Python; familiarity with C++

  • Strong software engineering fundamentals: concise, well-tested code; fluent with AI-assisted tooling

  • Experience with ML accelerators with a basic understanding of how certain ML layers affect execution time

  • Familiarity with PyTorch internals (custom ops, autograd, export) or equivalent framework

  • Experience reading, modifying, or contributing to a large open-source codebase

  • MS/PhD in Computer Science or related field, or equivalent experience

  • 4+ years in a relevant software engineering role

  • Demonstrated ability to move fast with ambiguous requirements, with strong written and verbal communication

Ways to stand out from the crowd:

  • Experience contributing to inference serving frameworks (vLLM, TRT-LLM, SGLang) or Triton kernel development

  • Track record of debugging numerical issues across mixed-precision boundaries

  • Deep experience with model compression techniques: PTQ, QAT, structured/unstructured sparsity

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $152,000–$241,500 USD for Level 3 and $184,000–$287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 1, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

