Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Senior Machine Learning Applications and Compiler Engineer

at Nvidia



Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 11 hours ago


Compensation
$135,000 – $220,000 CAD

Location
Toronto, Canada

NVIDIA is seeking a Senior Machine Learning Applications and Compiler Engineer to develop algorithms and optimizations for the inference and compiler stack. You will design end-to-end mappings of large-scale inference workloads onto NVIDIA systems, and extend NVIDIA's software ecosystem with libraries, tooling, and interfaces for seamless model deployment. You will benchmark, profile, and monitor performance, collaborating with hardware architects to influence future architectures and co-design features for higher performance. You will prototype new compilation and runtime techniques for graph-based DL workloads and publish results at top ML, compiler, and computer architecture venues.

We are now looking for a Senior Machine Learning Applications and Compiler Engineer!

NVIDIA is seeking engineers to develop algorithms and optimizations for our inference and compiler stack. You will work at the intersection of large-scale systems, compilers, and deep learning, crafting how neural network workloads map onto future NVIDIA platforms. This is your chance to be part of something outstandingly innovative!

What you’ll be doing:

  • Build and maintain high-performance runtime and compiler components, focusing on end-to-end inference optimization.

  • Define and implement mappings of large-scale inference workloads onto NVIDIA’s systems.

  • Extend and integrate with NVIDIA’s software ecosystem, contributing to libraries, tooling, and interfaces that enable seamless deployment of models across platforms.

  • Benchmark, profile, and monitor key performance and efficiency metrics to ensure the compiler generates efficient mappings of neural network graphs to our inference hardware.

  • Collaborate closely with hardware architects and design teams to feed back software observations, influence future architectures, and co-design features that unlock new performance and efficiency points.

  • Prototype and evaluate new compilation and runtime techniques, including graph transformations, scheduling strategies, and memory/layout optimizations tailored to spatial processors.

  • Publish and present technical work on novel compilation approaches for inference and related spatial accelerators at top-tier ML, compiler, and computer architecture venues.

What we need to see:

  • MS or PhD in Computer Science, Electrical/Computer Engineering, or a related field, or equivalent experience, with 5 years of relevant experience.

  • Strong software engineering background with proficiency in systems-level programming (e.g., C/C++ and/or Rust) and solid CS fundamentals in data structures, algorithms, and concurrency.

  • Hands-on experience with compiler or runtime development, including IR design, optimization passes, or code generation.

  • Experience with LLVM and/or MLIR, including building custom passes, dialects, or integrations.

  • Familiarity with deep learning frameworks such as TensorFlow and PyTorch, and experience working with portable graph formats such as ONNX.

  • Solid understanding of parallel and heterogeneous compute architectures, such as GPUs, spatial accelerators, or other domain-specific processors.

  • Strong analytical and debugging skills, with experience using profiling, tracing, and benchmarking tools to drive performance improvements.

  • Excellent communication and collaboration skills, with the ability to work across hardware, systems, and software teams.

  • Ideal candidates will have direct experience with MLIR-based compilers or other multi-level IR stacks, especially in the context of graph-based deep learning workloads.

Ways to stand out from the crowd:

  • Prior work on spatial or dataflow architectures, including static scheduling, pipeline parallelism, or tensor parallelism at scale.

  • Contributions to open-source ML frameworks, compilers, or runtime systems, particularly in areas related to performance or scalability.

  • Demonstrated research impact, such as publications or presentations at conferences like PLDI, CGO, ASPLOS, ISCA, MICRO, MLSys, NeurIPS, or similar.

  • Experience with large-scale distributed AI inference or training systems, including performance modeling and capacity planning for multi-rack deployments.

#LI-Hybrid

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 135,000 – 185,000 CAD for Level 3, and 170,000 – 220,000 CAD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 8, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

