Developer Technology Engineering Intern - 2026
at Nvidia
- Compensation: Not specified
- City: Beijing
- Country: China
- Currency: Not specified
Join NVIDIA's Compute Developer Technology (DevTech) team as an intern, researching and developing techniques to GPU-accelerate leading HPC, ML, and data-processing applications on current and next-generation GPUs. You will work directly with application developers to optimize core parallel algorithms and data structures, focusing on training and inference optimization for large language models and contributing to frameworks such as Megatron, TRTLLM, SGLang, and vLLM. You will also collaborate with NVIDIA's architecture, research, libraries, tools, and system software teams to influence next-generation architectures and programming models and to assess their impact on performance and developer productivity, and you will engage in deep optimization of high-performance operators, including CUDA optimization, with contributions to products such as cuDNN, cuBLAS, and CUTLASS.
NVIDIA is looking for a passionate intern to join its Compute Developer Technology (DevTech) team. In this role, you will research and develop techniques to GPU-accelerate leading applications in high-performance computing, machine and deep learning, scientific computing, and data processing, performing in-depth analysis and optimization to ensure the best possible performance on current- and next-generation GPU architectures.
What you will be doing:
Working directly with key application developers (especially in the LLM space) to understand the current and future problems they are solving, and creating and optimizing core parallel algorithms and data structures to provide the best GPU solutions, through both library development and direct contribution to the applications. This includes training and inference optimization for large language models and direct contributions to frameworks such as Megatron, TRTLLM, SGLang, and vLLM.
Collaborating closely with the architecture, research, libraries, tools, and system software teams at NVIDIA to influence the design of next-generation architectures, software platforms, and programming models, including investigating their impact on application performance and developer productivity.
Engaging in deep optimization of high-performance operators, including but not limited to CUDA kernel optimization and instruction- and compiler-level optimization. These optimizations will directly support customers or be integrated into products such as cuDNN, cuBLAS, and CUTLASS.
What we need to see:
Currently pursuing an MS or PhD at a leading university in an engineering or computer science related discipline.
Strong knowledge of C/C++ and/or Fortran.
Knowledge of software design, programming techniques, and algorithms.
Knowledge of LLM training/inference optimization, including development and optimization experience with distributed training/inference and technologies such as NCCL, NVSHMEM, InfiniBand, and RoCE.
Strong mathematical fundamentals, including linear algebra and numerical methods.
Experience with parallel programming, ideally CUDA C/C++ and OpenACC.
Good communication and organization skills, a logical approach to problem solving, and strong time management and task prioritization.

