Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Machine Learning Engineer, GeForce G-Assist

at Nvidia

Industry not specified


Tech Lead · No visa sponsorship · Data Science/AI/ML

Posted 10 hours ago


Compensation
$184,000 – $356,500 USD


City: Not specified
Country: United States

Join NVIDIA to work on GeForce G-Assist, an on-device AI assistant that combines small language models, retrieval systems, and hybrid cloud capabilities to deliver context-aware assistance inside GeForce. You will evaluate and improve SLMs in production, focusing on accuracy, robustness, and conversational reliability, and mitigate conversation/context contamination such as state drift and prompt leakage. You'll work with SLM and VLM architectures for text and multimodal interactions, and design hybrid local-cloud inference pipelines while optimizing performance in performance-critical paths using llama.cpp and C/C++. You'll design and integrate retrieval-augmented generation (RAG) systems grounded in context and support agentic AI workflows with planning, tool use, and multi-step execution.
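The hybrid local-cloud inference pipeline described above can be pictured as a simple router: short, latency-sensitive requests stay on the on-device SLM, while long-context or tool-using requests escalate to a cloud model. The sketch below is purely illustrative — the thresholds, labels, and function names are hypothetical and are not taken from G-Assist itself.

```python
# Hypothetical sketch of a hybrid local/cloud inference router.
# Backend labels and thresholds are illustrative, not from the posting.
LOCAL, CLOUD = "local-slm", "cloud-llm"

def route(prompt: str, context_tokens: int,
          max_local_ctx: int = 4096, needs_tools: bool = False) -> str:
    """Pick an inference backend for one request."""
    if needs_tools:                     # agentic/tool-use turns -> larger model
        return CLOUD
    if context_tokens > max_local_ctx:  # context exceeds the on-device window
        return CLOUD
    return LOCAL                        # default: fast on-device SLM

print(route("lower my GPU fan speed", context_tokens=120))       # local-slm
print(route("summarize this 30-page log", context_tokens=9000))  # cloud-llm
```

In a real system the routing signal would come from the runtime (token counts, model capabilities, latency budget) rather than fixed constants, but the decision structure is the same.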

At NVIDIA, we’re building GeForce G-Assist — an on-device AI assistant that combines Small Language Models (SLMs), retrieval systems, and hybrid cloud capabilities to deliver responsive, context-aware assistance inside the GeForce ecosystem. We work closely across engineering and product teams to ensure G-Assist performs reliably in real-world scenarios.

What you'll be doing:

  • Together, we focus on how models behave in production, not just on benchmarks. Evaluate and improve Small Language Models used in GeForce G-Assist, with an emphasis on accuracy, robustness, and conversational reliability. Identify and mitigate conversation and context contamination, including state drift, prompt leakage, and retrieval cross-talk.

  • Work with SLM and VLM architectures to support text and multimodal interactions. Collaborate on hybrid architectures that combine local SLMs with cloud-based models. We value engineers who enjoy thinking across the full system—from model behavior to runtime performance.

  • Optimize local inference using llama.cpp, including quantization, memory usage, and performance tuning. Read, write, and optimize C/C++ code in performance-critical paths.

  • Design and integrate retrieval-augmented generation (RAG) systems that ground responses in system and user context. Support agentic AI workflows, enabling planning, tool use, and multi-step execution.
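As a rough illustration of the RAG grounding idea in the bullets above, the sketch below retrieves the best-matching context snippet with a toy term-overlap scorer and splices it into the prompt. Everything here (the scorer, the snippet store, the prompt template) is a hypothetical stand-in for a real retriever and inference stack.

```python
# Toy retrieval-augmented prompt assembly. The scorer and snippet
# store are illustrative stand-ins for a real RAG pipeline.
def score(query: str, doc: str) -> int:
    """Count query terms that appear in the document (toy relevance)."""
    terms = set(query.lower().split())
    return sum(1 for word in doc.lower().split() if word in terms)

def build_prompt(query: str, snippets: list[str]) -> str:
    """Ground the prompt in the single best-matching context snippet."""
    best = max(snippets, key=lambda doc: score(query, doc))
    return f"Context:\n{best}\n\nQuestion: {query}\nAnswer:"

snippets = [
    "G-Assist runs a small language model on the local GPU.",
    "Driver settings can be tuned from the performance panel.",
]
print(build_prompt("which model runs locally on the GPU?", snippets))
```

A production retriever would use embeddings and a vector index instead of term overlap, and would return several ranked snippets, but the grounding step — assembling retrieved context into the prompt — looks much like this.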

What we need to see:

  • 8+ years of validated experience in system software or a related field, with an M.S. or higher degree in Computer Science, Data Science, Engineering, or a related field (or equivalent experience). We’re looking for teammates who enjoy solving real problems, learning as they go, and collaborating in a tight-knit environment.

  • Strong ability to read and write C/C++ code in systems-level or performance-sensitive environments, along with proficiency in Python. Hands-on experience with llama.cpp or similar local inference frameworks.

  • Hands-on experience evaluating Small Language Models, including task-based and conversational testing, with an understanding of conversation dynamics, long-context behavior, and contamination challenges.

  • Knowledge of SLM and VLM architectures and their trade-offs, experience with retrieval technologies and language-model integration, and familiarity with agentic AI patterns such as tool use and planning.

Ways to stand out from the crowd:

  • Experience contributing to language or multimodal models that power user-facing products, features, or workflows.

  • A track record of collaborating with product, platform, or systems teams to balance model capability, performance, and user experience.

  • Demonstrated ability to translate user needs or feedback into measurable improvements in model behavior or system reliability.

We are widely considered one of the technology world's most desirable employers, with competitive salaries, a generous benefits package, and some of the most forward-thinking and hardworking people in the world working for us. Due to unprecedented growth, our engineering teams are rapidly expanding. If you're a creative, driven, and autonomous engineer with a real passion for technology, we'd love to hear from you.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $184,000 – $287,500 USD for Level 4 and $224,000 – $356,500 USD for Level 5.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until January 31, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
