Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Senior Software Engineer, AI Inference Systems

at Nvidia


Tech Lead · No visa sponsorship · Python

Posted 15 hours ago

Compensation
$184,000 – $356,500 USD

Location
United States (city not specified)

We are seeking highly skilled and motivated software engineers to join us and build AI inference systems that serve large-scale models with extreme efficiency. You’ll architect and implement high-performance inference stacks, optimize GPU kernels and compilers, drive industry benchmarks, and scale workloads across multi-GPU, multi-node, and multi-cloud environments. You’ll collaborate across inference, compiler, scheduling, and performance teams to push the frontier of accelerated computing for AI.

What you’ll be doing:

  • Contribute features to vLLM that empower the newest models with the latest NVIDIA GPU hardware features; profile and optimize the inference framework (vLLM) with methods like speculative decoding, data/tensor/expert/pipeline-parallelism, prefill-decode disaggregation.

  • Develop, optimize, and benchmark GPU kernels (hand-tuned and compiler-generated) using techniques such as fusion, autotuning, and memory/layout optimization; build and extend high-level DSLs and compiler infrastructure to boost kernel developer productivity while approaching peak hardware utilization.

  • Define and build inference benchmarking methodologies and tools; contribute new benchmarks and NVIDIA’s submissions to the industry-leading MLPerf Inference benchmarking suite.

  • Architect the scheduling and orchestration of containerized large-scale inference deployments on GPU clusters across clouds.

  • Conduct and publish original research that pushes the Pareto frontier of ML systems; survey recent publications and integrate research ideas and prototypes into NVIDIA’s software products.

What we need to see:

  • Bachelor’s degree (or equivalent experience) in Computer Science (CS), Computer Engineering (CE), or Software Engineering (SE) with 7+ years of experience; alternatively, a Master’s degree in CS/CE/SE with 5+ years of experience; or a PhD with a thesis and top-tier publications in ML systems, GPU architecture, or high-performance computing.

  • Strong programming skills in Python and C/C++; experience with Go or Rust is a plus; solid CS fundamentals: algorithms and data structures, operating systems, computer architecture, parallel programming, distributed systems, and deep learning theory.

  • Knowledgeable and passionate about performance engineering in ML frameworks (e.g., PyTorch) and inference engines (e.g., vLLM and SGLang).

  • Familiarity with GPU programming and performance: CUDA, memory hierarchy, streams, NCCL; proficiency with profiling/debug tools (e.g., Nsight Systems/Compute).

  • Experience with containers and orchestration (Docker, Kubernetes, Slurm); familiarity with Linux namespaces and cgroups.

  • Excellent debugging, problem-solving, and communication skills; ability to excel in a fast-paced, multi-functional setting.

Ways to stand out from the crowd

  • Experience building and optimizing LLM inference engines (e.g., vLLM, SGLang).

  • Hands-on work with ML compilers and DSLs (e.g., Triton, TorchDynamo/Inductor, MLIR/LLVM, XLA), GPU libraries (e.g., CUTLASS) and features (e.g., CUDA Graph, Tensor Cores).

  • Experience contributing to containerization/virtualization technologies such as containerd/CRI-O/CRIU.

  • Experience with cloud platforms (AWS/GCP/Azure), infrastructure as code, CI/CD, and production observability.

  • Contributions to open-source projects and/or publications; please include links to GitHub pull requests, published papers and artifacts.

At NVIDIA, we believe artificial intelligence (AI) will fundamentally transform how people live and work. Our mission is to advance AI research and development to create groundbreaking technologies that enable anyone to harness the power of AI and benefit from its potential. Our team consists of experts in AI, systems and performance optimization. Our leadership includes world-renowned experts in AI systems who have received multiple academic and industry research awards. If you’re excited to build systems, kernels, and tools that make large-scale AI faster, more efficient, and easier to deploy, we’d love to hear from you.

#LI-Hybrid

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $184,000 – $287,500 USD for Level 4, and $224,000 – $356,500 USD for Level 5.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 28, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
