Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

HPC and AI Software Architect

at Nvidia

Industry: Not specified


Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 7 hours ago

Compensation: Not specified
Currency: Not specified
City: Zurich
Country: Switzerland

NVIDIA is seeking an HPC and AI Inference Software Architect to shape scalable AI infrastructure, focusing on distributed training, real-time inference, and interconnect optimization across large-scale systems. You will design and prototype scalable software to optimize AI training and inference throughput, latency, and memory efficiency, and evaluate enhancements to communication libraries such as NCCL, UCX, and UCC. You will collaborate with AI framework teams (TensorFlow, PyTorch, JAX) to improve integration, performance, and reliability of communication backends, and co-design hardware features to accelerate data movement for inference and model serving. You will contribute to runtime systems and AI-specific protocol layers as part of NVIDIA's world-class HPC and AI team.

NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. Today, we lead in artificial intelligence, driving advances in natural language processing, computer vision, autonomous systems, and scientific research. We are looking for a forward-thinking HPC and AI Inference Software Architect to help shape the future of scalable AI infrastructure—focusing on distributed training, real-time inference, and communication optimization across large-scale systems. Join our world-class team of researchers and engineers building next-generation software and hardware systems that power the most demanding AI workloads on the planet.

What you will be doing:

  • Design and prototype scalable software systems that optimize distributed AI training and inference—focusing on throughput, latency, and memory efficiency.

  • Develop and evaluate enhancements to communication libraries such as NCCL, UCX, and UCC, tailored to the unique demands of deep learning workloads.

  • Collaborate with AI framework teams (e.g., TensorFlow, PyTorch, JAX) to improve integration, performance, and reliability of communication backends.

  • Co-design hardware features (e.g., in GPUs, DPUs, or interconnects) that accelerate data movement and enable new capabilities for inference and model serving.

  • Contribute to the evolution of runtime systems, communication libraries, and AI-specific protocol layers.

What we need to see:

  • Ph.D. or equivalent industry experience in computer science, computer engineering, or a closely related field.

  • 2+ years of experience in systems programming, parallel or distributed computing, or high-performance data movement.

  • Strong programming background in C++, Python, and ideally CUDA or other GPU programming models.

  • Practical experience with AI frameworks (e.g., PyTorch, TensorFlow) and familiarity with how they use communication libraries under the hood.

  • Experience in designing or optimizing software for high-throughput, low-latency systems.

  • Strong collaboration skills in a multi-national, interdisciplinary environment.
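For orientation on the communication-library requirements above: the ring all-reduce that libraries such as NCCL build their collectives around can be sketched in plain Python. This is a sequential simulation of the algorithmic idea only; `ring_all_reduce` is a hypothetical name, and real implementations pipeline these transfers across GPUs and NICs.

```python
def ring_all_reduce(buffers):
    """Simulate a ring all-reduce over `buffers`: one equal-length list of
    numbers per worker. Mutates and returns the buffers; on exit every
    worker holds the element-wise sum across all workers."""
    p, n = len(buffers), len(buffers[0])
    bounds = [i * n // p for i in range(p + 1)]  # split into p contiguous chunks

    # Phase 1: reduce-scatter. After p-1 steps, worker w owns the fully
    # reduced chunk (w + 1) % p.
    for step in range(p - 1):
        sends = []
        for w in range(p):                       # each worker sends one chunk
            c = (w - step) % p
            sends.append((c, buffers[w][bounds[c]:bounds[c + 1]]))
        for w in range(p):                       # receive from left neighbor
            c, data = sends[(w - 1) % p]
            for i, v in enumerate(data):
                buffers[w][bounds[c] + i] += v   # accumulate partial sums

    # Phase 2: all-gather. Each worker circulates its finished chunk around
    # the ring until everyone has every chunk.
    for step in range(p - 1):
        sends = []
        for w in range(p):
            c = (w + 1 - step) % p
            sends.append((c, buffers[w][bounds[c]:bounds[c + 1]]))
        for w in range(p):
            c, data = sends[(w - 1) % p]
            buffers[w][bounds[c]:bounds[c + 1]] = data
    return buffers
```

Each worker transmits roughly 2(p-1)/p of its buffer in total, which is why the ring schedule is bandwidth-optimal for large messages and a natural fit for gradient synchronization in distributed training.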

Ways to stand out from the crowd:

  • Expertise with NCCL, Gloo, UCX, or similar libraries used in distributed AI workloads.

  • Background in networking and communication protocols, RDMA, collective communications, or accelerator-aware networking.

  • Deep understanding of large model training, inference serving at scale, and associated communication bottlenecks.

  • Knowledge of quantization, tensor/activation fusion, or memory optimization for inference.

  • Familiarity with infrastructure for deployment of LLMs or transformer-based models, including sharding, pipelining, or hybrid parallelism.
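As context for the quantization bullet above, symmetric per-tensor int8 quantization can be sketched in a few lines of plain Python. The helper names are hypothetical, and production inference stacks typically quantize per-channel with calibrated scales rather than this minimal form.

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: one scale maps the largest
    magnitude onto 127, so zero stays exactly representable."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # guard all-zero input
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floats; rounding error is at most scale / 2."""
    return [v * scale for v in q]
```

Storing weights or activations as int8 cuts memory traffic by 4x versus float32, trading a bounded rounding error of at most half a quantization step per element.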

At NVIDIA, you’ll work alongside some of the brightest minds in the industry, pushing the boundaries of what’s possible in AI and high-performance computing. If you're passionate about distributed systems, AI inference, and solving problems at scale, we want to hear from you.

NVIDIA is at the forefront of breakthroughs in Artificial Intelligence, High-Performance Computing, and Visualization. Our teams are composed of driven, innovative professionals dedicated to pushing the boundaries of technology. We offer highly competitive salaries, an extensive benefits package, and a work environment that promotes diversity, inclusion, and flexibility. As an equal opportunity employer, we are committed to fostering a supportive and empowering workplace for all.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. For Poland: The base salary range is 176,250 PLN - 305,500 PLN for Level 2, and 221,250 PLN - 383,500 PLN for Level 3.
