Principal Software Engineer – Large-Scale LLM Memory and Storage Systems

at Nvidia

Tech Lead · No visa sponsorship · C/C++/C#

Posted 8 hours ago

Compensation: $272,000 – $431,250 USD
Location: Santa Clara, United States

Lead the memory and storage architecture for NVIDIA Dynamo to enable high-throughput, low-latency large-scale LLM inference across disaggregated clusters. Define and drive the roadmap for a unified memory layer spanning GPU memory, pinned host memory, RDMA-accessible memory, SSD tiers, and remote storage. Architect deep integrations with LLM serving engines (e.g., vLLM, SGLang, TensorRT-LLM), focusing on KV-cache offload, reuse, and remote sharing, while co-designing interfaces for multi-tier KV-cache storage. Mentor engineers, partner with GPU, networking, and platform teams to leverage RDMA, NVLink, and GPUDirect, and represent the team in reviews, open-source communities, and customer deep dives.

NVIDIA Dynamo is a high-throughput, low-latency inference framework for serving generative AI and reasoning models across multi-node distributed environments. Built in Rust for performance and Python for extensibility, Dynamo orchestrates GPU shards, routes requests, and manages shared KV cache across heterogeneous clusters so that many accelerators feel like a single system at datacenter scale. As large language models rapidly outgrow the memory and compute budget of any single GPU, this platform enables efficient, resilient deployment of cutting-edge LLM workloads.
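
To make the multi-tier idea concrete, here is a minimal Python sketch of a KV-cache that searches tiers fastest-to-slowest and promotes hot blocks upward, spilling evictions downward. It illustrates the general technique only; all class and method names are hypothetical and do not reflect Dynamo's actual interfaces.

    # Illustrative only: a toy multi-tier KV-cache with LRU promotion and
    # spill. All names here are hypothetical, not Dynamo's actual API.
    from collections import OrderedDict

    class Tier:
        """One cache tier (e.g., GPU HBM, host DRAM, SSD) with LRU eviction."""
        def __init__(self, name: str, capacity: int):
            self.name = name
            self.capacity = capacity
            self.entries: OrderedDict[str, bytes] = OrderedDict()

        def get(self, key: str):
            if key in self.entries:
                self.entries.move_to_end(key)  # mark as most recently used
                return self.entries[key]
            return None

        def put(self, key: str, value: bytes):
            """Insert a block; return the evicted (key, value) pair, if any."""
            evicted = None
            if len(self.entries) >= self.capacity:
                evicted = self.entries.popitem(last=False)  # drop the LRU block
            self.entries[key] = value
            return evicted

    class TieredKVCache:
        """Search tiers fastest-to-slowest; on a hit in a slow tier, promote
        the block upward and cascade any evicted blocks downward."""
        def __init__(self, tiers):
            self.tiers = tiers  # ordered fastest (GPU) to slowest (remote)

        def insert(self, key: str, value: bytes):
            self._put_with_spill(key, value, level=0)

        def lookup(self, key: str):
            for i, tier in enumerate(self.tiers):
                value = tier.get(key)
                if value is not None:
                    if i > 0:  # promote; the slower copy is kept (inclusive tiers)
                        self._put_with_spill(key, value, level=0)
                    return value, tier.name
            return None, None

        def _put_with_spill(self, key, value, level):
            spill = self.tiers[level].put(key, value)
            while spill is not None and level + 1 < len(self.tiers):
                level += 1
                spill = self.tiers[level].put(*spill)

    cache = TieredKVCache([Tier("gpu", 2), Tier("host", 4), Tier("ssd", 8)])
    cache.insert("blk-a", b"kv-bytes")
    print(cache.lookup("blk-a"))  # -> (b'kv-bytes', 'gpu')

The key design point the sketch captures is that eviction from a fast tier is not deletion: blocks demote through cheaper, larger tiers, so a later request can still reuse them at higher latency instead of recomputing.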


We are seeking a Principal Systems Engineer to define the vision and roadmap for memory management and storage in large-scale LLM systems.


What you'll be doing:

  • Design and evolve a unified memory layer that spans GPU memory, pinned host memory, RDMA-accessible memory, SSD tiers, and remote file/object/cloud storage to support large-scale LLM inference.

  • Architect and implement deep integrations with leading LLM serving engines (such as vLLM, SGLang, TensorRT-LLM), with a focus on KV-cache offload, reuse, and remote sharing across heterogeneous and disaggregated clusters.

  • Co-design interfaces and protocols that enable disaggregated prefill, peer-to-peer KV-cache sharing, and multi-tier KV-cache storage (GPU, CPU, local disk, and remote memory) for high-throughput, low-latency inference (a toy sketch of this flow follows this list).

  • Partner closely with GPU architecture, networking, and platform teams to exploit GPUDirect, RDMA, NVLink, and similar technologies for low-latency KV-cache access and sharing across heterogeneous accelerators and memory pools.

  • Mentor senior and junior engineers, set technical direction for memory and storage subsystems, and represent the team in internal reviews and external forums (open source, conferences, and customer-facing technical deep dives).
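
As a toy illustration of the disaggregated-prefill handoff referenced in the list above, the sketch below content-addresses KV blocks so a decode worker can fetch state that a prefill worker published, rather than recomputing it. Everything here (the store, the worker functions, the key scheme) is an assumed, simplified stand-in, not the interface of vLLM, SGLang, TensorRT-LLM, or Dynamo.

    # Illustrative only: a toy model of disaggregated prefill with KV-cache
    # handoff through a shared store, addressed by content hash. All names
    # are hypothetical stand-ins for real serving-engine interfaces.
    import hashlib

    def block_key(model_id: str, token_ids: tuple) -> str:
        """Content-address a KV block by model and token prefix so any worker
        that processed the same prefix can publish or reuse the same block."""
        return hashlib.sha256(f"{model_id}:{token_ids}".encode()).hexdigest()[:16]

    class SharedKVStore:
        """Stand-in for an RDMA-reachable, multi-node KV-cache store."""
        def __init__(self):
            self._blocks: dict[str, bytes] = {}

        def publish(self, key: str, block: bytes):
            self._blocks[key] = block

        def fetch(self, key: str):
            return self._blocks.get(key)

    def prefill_worker(store: SharedKVStore, model_id: str, prompt: tuple) -> str:
        """Run prefill for the prompt and publish the resulting KV state."""
        kv_block = b"kv-state-bytes"  # stands in for real attention KV output
        key = block_key(model_id, prompt)
        store.publish(key, kv_block)
        return key

    def decode_worker(store: SharedKVStore, key: str) -> bytes:
        """Fetch the prefill worker's KV block instead of recomputing it,
        then continue autoregressive decoding from that state."""
        kv_block = store.fetch(key)
        if kv_block is None:
            raise LookupError("cache miss: fall back to local prefill")
        return kv_block

    store = SharedKVStore()
    key = prefill_worker(store, "example-model", (101, 2041, 318))
    decode_worker(store, key)

Content addressing is what makes the sharing peer-to-peer: any node that has the block for a given prefix can serve it, so identical prompt prefixes across requests hit the cache regardless of which worker produced them.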

What we need to see:

  • Master's degree, PhD, or equivalent experience.

  • 15+ years of experience building large-scale distributed systems, high-performance storage, or ML systems infrastructure in C/C++ and Python, with a track record of delivering production services.

  • Deep understanding of memory hierarchies (GPU HBM, host DRAM, SSD, and remote/object storage) and experience designing systems that span multiple tiers for performance and cost efficiency.

  • Experience with distributed caching or key-value systems, especially designs optimized for low latency and high concurrency.

  • Hands-on experience with networked I/O and RDMA/NVMe-oF/NVLink-style technologies, and familiarity with concepts like disaggregated and aggregated deployments for AI clusters.

  • Strong skills in profiling and optimizing systems across CPU, GPU, memory, and network, using metrics to drive architectural decisions and validate improvements in time-to-first-token (TTFT) and throughput (a minimal measurement sketch follows this list).

  • Excellent communication skills and prior experience leading cross-functional efforts with research, product, and customer teams.
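
For readers unfamiliar with the TTFT metric mentioned above, the sketch below shows one simple, assumed way to measure it alongside decode throughput around a streaming generation call; the generate() callable is a hypothetical stand-in for any API that yields tokens as they are produced.

    # Illustrative only: measuring time-to-first-token (TTFT) and decode
    # throughput for one streaming request. generate() is hypothetical.
    import time

    def measure_ttft_and_throughput(generate, prompt: str):
        """Return (ttft_seconds, decode_tokens_per_second) for one request."""
        start = time.perf_counter()
        first_token_at = None
        n_tokens = 0
        for _token in generate(prompt):
            if first_token_at is None:
                first_token_at = time.perf_counter()  # first token arrives
            n_tokens += 1
        end = time.perf_counter()
        if first_token_at is None:
            raise RuntimeError("no tokens produced")
        ttft = first_token_at - start
        decode_time = end - first_token_at
        tps = (n_tokens - 1) / decode_time if decode_time > 0 else 0.0
        return ttft, tps

    def fake_generate(prompt: str):
        """Toy generator: simulated prefill delay, then per-token decode delay."""
        time.sleep(0.05)
        for tok in prompt.split():
            time.sleep(0.01)
            yield tok

    print(measure_ttft_and_throughput(fake_generate, "tiered kv cache demo"))

TTFT is dominated by prefill (and by whether a cached KV prefix was reused), while tokens-per-second after the first token reflects decode efficiency, which is why the two are tracked separately.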

Ways to stand out from the crowd:

  • Prior contributions to open-source LLM serving or systems projects focused on KV-cache optimization, compression, streaming, or reuse.

  • Experience designing unified memory or storage layers that expose a single logical KV or object model across GPU, host, SSD, and cloud tiers, especially in enterprise or hyperscale environments.

  • Publications or patents in areas such as LLM systems, memory-disaggregated architectures, RDMA/NVLink-based data planes, or KV-cache/CDN-like systems for ML.

With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our special engineering teams are growing fast. If you're a creative and autonomous engineer with a genuine passion for technology, we want to hear from you!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $272,000 – $431,250 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until January 13, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
