
Senior Deep Learning Performance Architect

at Nvidia


Mid Level · No visa sponsorship · Python

Posted 9 hours ago

Compensation
$184,000 – $287,500 USD

City: San Francisco, Palo Alto
Country: United States

Seeking a Senior Deep Learning Performance Architect to analyze and develop architectures that accelerate AI and HPC workloads. You will develop innovative hardware architectures to improve parallel computing performance, energy efficiency, and programmability, and build mathematical frameworks to reason about system availability and workload goodput at massive scales. You will also explore scheduling, parallelization, and resiliency strategies, run what-if studies on hardware configurations, and build/refine high-level simulators in Python to model performance interactions and guide the hardware/software roadmap.

We are now looking for a Senior Deep Learning Performance Architect! NVIDIA is seeking outstanding Performance Architects to help analyze and develop the next generation of architectures that accelerate AI and high-performance computing applications. Intelligent machines powered by Artificial Intelligence (computers that can learn, reason, and interact with people) are no longer science fiction. GPU Deep Learning has provided the foundation for machines to learn, perceive, reason, and solve problems. NVIDIA's GPUs run AI algorithms that simulate human intelligence and act as the brains of computers, robots, and self-driving cars that can perceive and understand the world. Come join our Deep Learning Architecture team, where you can help build the real-time, cost-effective computing platforms driving our success in this exciting and rapidly growing field!

What you’ll be doing:

  • Develop innovative HW architectures to extend the state of the art in parallel computing performance, energy efficiency and programmability.

  • Build the mathematical frameworks required to reason about system availability and workload goodput at massive scales.

  • Reason about overall Deep Learning workload performance under various scheduling, parallelization, and resiliency strategies.

  • Conduct "what-if" studies on hardware configurations, infrastructure knobs, and workload strategies to identify optimal system-level trade-offs.

  • Work closely with wider architecture and product teams to guide the hardware/software roadmap using data-driven performance and reliability projections.

  • Build and refine high-level simulators in Python to model the interaction between knobs that impact performance and resiliency.
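
To give a flavor of the kind of high-level modeling described above, here is a purely illustrative Python sketch of the classic checkpoint/restart goodput trade-off. The function names, parameters, and the first-order Young/Daly optimum are standard textbook assumptions, not anything specific to NVIDIA's internal tooling:

```python
import math

def goodput_fraction(mtbf_h, ckpt_interval_h, ckpt_cost_h, restart_cost_h):
    """Rough analytical estimate of the fraction of wall-clock time spent
    on useful training work under a checkpoint/restart resiliency scheme.

    Assumes failures arrive at a constant rate (1 / MTBF) and that each
    failure loses, on average, half a checkpoint interval of work plus
    the restart cost. All times are in hours.
    """
    # Fraction of each cycle spent writing the checkpoint itself.
    ckpt_overhead = ckpt_cost_h / (ckpt_interval_h + ckpt_cost_h)
    # Expected time per hour lost to failures: restart plus rework.
    failures_per_h = 1.0 / mtbf_h
    lost_per_h = failures_per_h * (restart_cost_h + ckpt_interval_h / 2)
    return max(0.0, (1.0 - ckpt_overhead) * (1.0 - lost_per_h))

def young_daly_interval(mtbf_h, ckpt_cost_h):
    """First-order optimal checkpoint interval (Young/Daly formula)."""
    return math.sqrt(2 * mtbf_h * ckpt_cost_h)

# What-if study: with a 50 h MTBF and a 15-minute checkpoint cost,
# compare checkpointing hourly vs. at the Young/Daly optimum (~5 h).
t_opt = young_daly_interval(mtbf_h=50, ckpt_cost_h=0.25)
print(f"optimal interval: {t_opt:.1f} h")
print(f"goodput @ 1 h:     {goodput_fraction(50, 1.0, 0.25, 0.1):.3f}")
print(f"goodput @ optimum: {goodput_fraction(50, t_opt, 0.25, 0.1):.3f}")
```

Real system-scale studies would replace these closed-form approximations with discrete-event simulation over scheduler and telemetry data, but the shape of the trade-off (checkpoint overhead vs. rework after failures) is the same.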

What we need to see:

  • MS or PhD in Computer Science, Computer Engineering, or Electrical Engineering, or equivalent experience.

  • 6+ years of relevant industry or research work experience.

  • Strong background in analytical and probabilistic modeling.

  • 2+ years of experience in parallel computing architectures, distributed systems, or interconnect fabrics.

  • A strong understanding of distributed deep learning workload scheduling in large-scale systems.

  • Proficiency in Python for building performance and reliability models.

Ways to stand out from the crowd:

  • Direct experience managing or troubleshooting large-scale jobs—you understand how jobs actually fail and recover in production.

  • Experience working with large-scale operational datasets (e.g., scheduler or hardware telemetry).

  • Knowledge of how orchestrators (e.g., Slurm, Kubernetes, PyTorch) manage workload recovery and job scheduling under failures.

  • Ability to simplify and communicate rich technical concepts with a non-technical audience.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $184,000 – $287,500 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 8, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
