
Solutions Architect, Inference Deployments

at NVIDIA

Industry: Not specified
Seniority: Mid Level
Visa sponsorship: Not offered
Category: Data Science / AI / ML

Posted 14 hours ago

Compensation: $152,000 – $241,500 USD
Location: United States (city not specified)

We’re forming a team to roll out and enhance AI inference solutions at scale, built on NVIDIA’s GPU technology and Kubernetes. As a Solutions Architect focused on inference, you’ll build inference pipelines, collaborate with DevOps to orchestrate disaggregated inference on Kubernetes, and accelerate pipelines using technologies such as TensorRT-LLM, vLLM, and SGLang. You’ll provide mentorship and technical leadership to customers and internal teams, guiding them through the deployment of disaggregated inference systems and the resolution of complex issues in enterprise environments. The role requires 5+ years of solutions-architecture experience with distributed systems and AI inference workloads on Kubernetes.

We’re forming a team of innovators to roll out and enhance AI inference solutions at scale, built on NVIDIA’s GPU technology and Kubernetes. As a Solutions Architect focused on inference, you’ll collaborate closely with our engineering and DevOps teams, as well as with customers, to develop enterprise AI solutions. Together, we'll deliver generative AI to production!

What you'll be doing:

  • Build inference pipelines with tools like NVIDIA Dynamo, distributing tasks among GPU workers to improve efficiency.

  • Collaborate with DevOps teams to orchestrate disaggregated inference using Kubernetes for complex workloads.

  • Accelerate inference pipelines using TensorRT-LLM, vLLM, SGLang, and other backends to ensure seamless integration with disaggregated inference.

  • Provide mentorship and technical leadership to customers and internal teams, guiding them through the deployment of disaggregated inference systems and resolving complex issues.
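To make the orchestration work above concrete, here is a minimal sketch of what serving an LLM inference backend on Kubernetes can look like. It is illustrative only: it uses the public vLLM OpenAI-compatible serving image rather than NVIDIA's internal tooling, and the deployment name, model, and resource values are placeholders, not anything specified in this posting.

```yaml
# Illustrative sketch: one vLLM serving replica on Kubernetes.
# Assumes the NVIDIA device plugin / GPU Operator is installed so
# that "nvidia.com/gpu" is a schedulable resource on the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest   # public vLLM serving image
          args:
            - "--model"
            - "meta-llama/Llama-3.1-8B-Instruct"   # example model
          ports:
            - containerPort: 8000          # OpenAI-compatible HTTP API
          resources:
            limits:
              nvidia.com/gpu: 1            # request one GPU per replica
```

A disaggregated setup of the kind described in the bullets would go further, splitting prefill and decode across separate worker pools and routing requests between them, but the basic building block is the same: GPU-scheduled pods behind a service.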

What we need to see:

  • 5+ years in solutions architecture with a proven track record of deploying distributed systems and AI inference workloads on Kubernetes.

  • Experience with at least one of NVIDIA Dynamo, Triton Inference Server, or TensorRT-LLM for model optimization and serving.

  • GPU orchestration using NVIDIA GPU Operator, NIM Operator, and Multi-Instance GPU (MIG) partitioning.

  • Experience solving sophisticated GPU-allocation problems and working with memory hierarchies (HBM, DRAM, SSD) and low-latency networking (RDMA, UCX).

  • Demonstrated success in tuning large language models for low-latency inference in enterprise environments.

  • BS in CS/Engineering or equivalent experience.

Ways to stand out from the crowd:

  • Prior experience deploying NVIDIA inference technologies such as Dynamo, NIM, NIXL, and Grove.

  • Deep understanding of transformer neural networks and of inference acceleration techniques such as quantization, speculative decoding, and wide expert parallelism (WideEP).

  • NVIDIA Certified AI Engineer or similar credentials.

  • Contributions to open-source projects including NVIDIA Dynamo, vLLM, KServe, or SGLang.

Your base salary will be determined by your location, experience, and the pay of employees in similar positions. The base salary range is $152,000 – $241,500 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 3, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

