Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Principal Software Engineer - AI Inference

at Nvidia

Tech Lead · No visa sponsorship · Data Science/AI/ML

Posted 13 hours ago

Compensation: $272,000 – $431,250 USD
Location: United States (city not specified)

Lead development and upstream contributions for AI inference engines (e.g., vLLM, SGLang) to optimize LLM serving on NVIDIA GPUs. This hands-on role focuses on inference-runtime features, performance engineering, and distributed multi-GPU/multi-node scaling, with collaboration across model teams, infrastructure/SRE, and product. You will drive upstream-first engineering, improve correctness and observability, and mentor senior engineers while aligning with production needs and community adoption.

NVIDIA is the platform for every new AI-powered application. We are seeking a Principal Software Engineer - AI Inference to advance open-source LLM serving by contributing to upstream inference engines such as vLLM and SGLang, ensuring they run exceptionally well on NVIDIA GPUs and systems, and strengthening the underlying stack for high-throughput, low-latency inference at scale.

This is a hands-on, deeply technical role for someone who excels at the intersection of inference runtime architecture, GPU performance engineering, and distributed systems. You will collaborate closely with internal model teams, infrastructure/SRE, and product to ensure NVIDIA platforms are outstanding members of the broader inference ecosystem, and you will deliver production-grade improvements that benefit both NVIDIA and the community.

What you'll be doing:

  • Drive upstream-first engineering in vLLM/SGLang: author and land PRs, engage in development discussions, help shape roadmaps, and build durable maintainer relationships.

  • Design and implement inference-runtime features that improve efficiency, latency, and tail behavior: request scheduling, batching policies, KV-cache management (paging/sharding), memory planning, and streaming.

  • Optimize core hot paths across the stack—from Python orchestration down to C++/CUDA kernels—using profiling and measurement to guide decisions.

  • Improve multi-GPU and multi-node inference: communication patterns, parallelism strategies (tensor/sequence/pipeline), and system-level scaling/efficiency.

  • Strengthen correctness, robustness, and operability: determinism where needed, graceful degradation, backpressure, observability hooks, and performance regression testing.

  • Collaborate across NVIDIA to integrate upstream advances with production needs (deployment patterns, compatibility, security posture) while keeping changes broadly adoptable by the community.

  • Mentor senior engineers, raise the technical bar through design and code reviews, and establish guidelines for performance engineering and upstream contribution workflows.

What we need to see:

  • 15+ years building production software with significant depth in systems engineering; strong track record of owning ambiguous, high-impact technical problems end-to-end.

  • Demonstrated expertise in LLM inference/serving systems (e.g., vLLM, SGLang) and the tradeoffs that drive real production performance.

  • Strong programming skills in Rust, C++, Python, CUDA; ability to read, modify, and optimize performance-critical code across layers.

  • Experience with GPU performance analysis tools and methodologies (profiling, microbenchmarking, memory/comms analysis) and a strong measurement culture.

  • Solid foundation in distributed systems and concurrency: queues/schedulers, RPC/streaming, multi-process/multi-threaded runtime behavior, and scaling patterns across nodes.

  • Excellent communication skills; ability to influence across teams and represent NVIDIA well in open-source technical forums.

  • BS/MS in Computer Science, Computer Engineering, or related field (or equivalent experience).

Ways to stand out from the crowd:

  • Substantial open-source contributions to vLLM, SGLang, PyTorch, Triton, NCCL, or related GPU/inference infrastructure; prior maintainer experience is a plus.

  • Shipped performance features such as paged attention/KV paging, speculative decoding, advanced scheduling, quantization-aware serving, or low-latency streaming optimizations.

  • Experience optimizing inference across the full stack: tokenizer and Python runtime overheads, kernel fusion, memory bandwidth, PCIe/NVLink effects, and network fabrics (e.g., InfiniBand).

  • Built robust benchmarking and regression infrastructure for latency and efficiency, including dataset selection, load modeling, and reproducible performance tracking.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $272,000 – $431,250 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 27, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
