Developer Technology Engineer - AI
at Nvidia
Posted 16 hours ago
- Compensation: Not specified
- City: Beijing
- Country: China
- Currency: Not specified
NVIDIA is seeking a skilled Developer Technology Engineer to join its Compute DevTech team to research and develop GPU-accelerated techniques for HPC, ML, and data processing workloads. The role focuses on training and inference optimization for large language models, contributing to frameworks such as Megatron, TRTLLM, SGLang, and vLLM, and optimizing core parallel algorithms and data structures for GPUs. You will collaborate with architecture, research, libraries, tools, and system software teams to influence next-generation architectures and programming models, and perform deep optimization of high-performance operators across CUDA, cuDNN, cuBLAS, and CUTLASS, with some travel for conferences and on-site developer visits.
NVIDIA is looking for a passionate, world-class computer scientist to work in its Compute Developer Technology (DevTech) team. In this role, you will research and develop techniques to GPU-accelerate leading applications in high-performance computing, machine and deep learning, scientific computing, and data processing, performing in-depth analysis and optimization to ensure the best possible performance on current- and next-generation GPU architectures.
What you will be doing:
Working directly with key application developers (especially LLM developers) to understand the current and future problems they are solving, and creating and optimizing core parallel algorithms and data structures to provide the best solutions using GPUs, through both library development and direct contribution to the applications. This includes training and inference optimization for large language models, with direct contributions to frameworks such as Megatron, TRTLLM, SGLang, and vLLM.
Collaborating closely with the architecture, research, libraries, tools, and system software teams at NVIDIA to influence the design of next-generation architectures, software platforms, and programming models, including by investigating impact on application performance and developer productivity.
Engaging in deep optimization of high-performance operators, including but not limited to deep CUDA optimization and instruction- and compiler-level optimization. These optimizations will directly support customers or be integrated into products such as cuDNN, cuBLAS, and CUTLASS.
Some travel is required for conferences and for on-site visits with developers.
What we need to see:
A university degree in engineering or a computer-science-related discipline (BS required; MS or PhD preferred).
2+ years of working experience is required.
Strong knowledge of C/C++ and/or Fortran.
Deep knowledge of software design, programming techniques, and algorithms.
Expert knowledge of LLM training/inference optimization, including but not limited to experience developing and optimizing distributed training/inference with NCCL, NVSHMEM, InfiniBand (IB), RoCE, etc.
Strong mathematical fundamentals, including linear algebra and numerical methods.
Experience with parallel programming, ideally CUDA C/C++ and OpenACC.
Good communication and organizational skills, a logical approach to problem solving, and strong time management and task prioritization.

