
Senior GPU Supercomputer Scheduler Engineer

at NVIDIA

Mid Level · No visa sponsorship · C/C++/C#

Posted 13 hours ago

Compensation: $152,000 – $287,500 USD
Location: United States (city not specified)

Join NVIDIA's Managed AI Research Superclusters (MARS) Scheduling team as a Senior GPU Supercomputer Scheduler Engineer. Design and develop batch scheduling features and batch workload orchestration for large multi-node GPU clusters running deep learning, HPC, and AI workloads. Focus on resource usage fairness, GPU occupancy, resilience, and performance optimization, while building automation and tooling to scale operations. Work with SLURM/K8s batch schedulers, Linux environments, containers, and performance tuning to deliver production-grade solutions.

NVIDIA is a pioneer in accelerated computing, known for inventing the GPU and driving breakthroughs in gaming, computer graphics, high-performance computing, and artificial intelligence. Our technology powers everything from generative AI to autonomous systems, and we continue to shape the future of computing through innovation and collaboration. Within this mission, our team, Managed AI Research Superclusters (MARS), builds and scales the infrastructure, platforms, and tools that enable researchers and engineers to develop the next generation of AI/ML systems. By joining us, you’ll help design solutions that power some of the world’s most advanced computing workloads.

As a member of the Scheduling team, you will participate in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high-performance computing, and computationally intensive workloads. We seek engineers with deep technical expertise to identify architectural directions and new approaches for AI workload scheduling, serving many simultaneous, large multi-node GPU workloads with complex requirements and dependencies. This role offers you an excellent opportunity to deliver production-grade solutions, get hands-on with groundbreaking technology, and work closely with technical leaders solving some of the biggest challenges in machine learning, cloud computing, and system co-design.

What you'll be doing:

  • Design and develop new scheduling features and add-on services to improve GPU compute clusters across many dimensions, such as resource usage fairness, GPU occupancy, GPU waste, application resilience, application performance, and power usage

  • Design and develop batch workload management and orchestration services

  • Provide support to staff and end users to resolve batch scheduler issues

  • Build and improve our ecosystem around GPU-accelerated computing

  • Analyze and optimize the performance of deep learning workflows

  • Develop large-scale automation solutions

  • Perform root-cause analysis and suggest corrective actions for problems at both large and small scales

  • Find and fix problems before they occur
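
The fairness goal above can be illustrated with a toy version of the fair-share weighting that batch schedulers such as SLURM apply through their multifactor priority plugin: users who have consumed more than their allotted share of the cluster see their pending jobs deprioritized. This is an illustrative sketch only; the names, units, and exact formula are assumptions, not NVIDIA's implementation:

```python
# Toy fair-share priority, in the spirit of SLURM's fair-share factor
# 2^(-usage/share). Usage and shares are in GPU-hours here (an assumption).

def fair_share_factor(usage: float, share: float) -> float:
    """Priority factor in (0, 1]; 1.0 means the user has consumed nothing."""
    if share <= 0:
        return 0.0
    return 2.0 ** (-usage / share)

def rank_jobs(pending, usage, shares):
    """Order pending (user, job_id) pairs by descending fair-share factor."""
    return sorted(
        pending,
        key=lambda uj: fair_share_factor(usage.get(uj[0], 0.0), shares[uj[0]]),
        reverse=True,
    )

usage = {"alice": 800.0, "bob": 100.0}   # GPU-hours already consumed
shares = {"alice": 400.0, "bob": 400.0}  # allotted GPU-hours
pending = [("alice", "job-1"), ("bob", "job-2")]
print(rank_jobs(pending, usage, shares))
# bob has used less of his share, so job-2 ranks first
```

Production schedulers add a half-life decay to historical usage and combine the fair-share factor with age, size, and QOS weights; the sketch keeps only the share-versus-usage core.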

What we need to see:

  • Bachelor’s degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience

  • 5+ years of work experience

  • Strong understanding of batch scheduling, preferably with experience in SLURM or Kubernetes batch schedulers (Kueue, Volcano, etc.)

  • Significant experience with systems programming languages such as C/C++ and Go, as well as scripting languages such as Python and Bash

  • Established experience with the Linux operating system, environment, and tools

  • Experience analyzing and tuning performance for a variety of AI workloads

  • In-depth understanding of container technologies such as Docker, Singularity, and Podman

  • Flexibility/adaptability for working in a dynamic environment with different frameworks and requirements

  • Excellent communication, interpersonal and customer collaboration skills
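
Day to day, the SLURM and container requirements above translate into working with job scripts like the following minimal multi-node GPU batch job. The partition, account, image, and script names are placeholders, and the container launch assumes NVIDIA's pyxis plugin for SLURM:

```shell
#!/bin/bash
#SBATCH --job-name=train-demo
#SBATCH --nodes=2                  # two-node job
#SBATCH --ntasks-per-node=8        # one task per GPU
#SBATCH --gpus-per-node=8
#SBATCH --time=04:00:00
#SBATCH --partition=batch          # placeholder partition name
#SBATCH --account=research         # placeholder account

# Launch 16 tasks (8 per node) inside a container; --container-image
# comes from the pyxis plugin, and train.py is a placeholder script.
srun --container-image=nvcr.io/nvidia/pytorch:24.01-py3 \
     python train.py --epochs 10
```

Submitted with `sbatch`, a script like this exercises exactly the pieces the role touches: GPU resource requests, multi-node task layout, and containerized workload launch.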

Ways to stand out from the crowd:

  • Knowledge of high-performance computing

  • Open-source software contributions

  • Experience with deep learning frameworks like PyTorch and TensorFlow

  • Passion for software development processes

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $152,000 – $241,500 USD for Level 3, and $184,000 – $287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 24, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
