Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

AI Inference Engineer

at Perplexity AI

Industry: Not specified


Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 11 hours ago


Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

We are seeking an AI Inference Engineer to join our growing team. You will work on large-scale deployment of machine learning models for real-time inference using Python, Rust, C++, PyTorch, Triton, CUDA and Kubernetes. Responsibilities include developing APIs for AI inference used by internal and external customers, benchmarking and addressing bottlenecks in the inference stack, improving reliability and observability, and exploring LLM inference optimizations. Qualifications include experience with ML systems and DL frameworks (e.g., PyTorch, TensorFlow, ONNX) and familiarity with LLM architectures and CUDA GPU programming.

We are looking for an AI Inference Engineer to join our growing team. Our current stack is Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. You will have the opportunity to work on large-scale deployment of machine learning models for real-time inference.

Responsibilities

  • Develop APIs for AI inference that will be used by both internal and external customers

  • Benchmark and address bottlenecks throughout our inference stack

  • Improve the reliability and observability of our systems and respond to system outages

  • Explore novel research and implement LLM inference optimizations
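The benchmarking responsibility above can be sketched in miniature. This is a hedged, stdlib-only illustration of measuring per-call latency percentiles for one stage of an inference pipeline; `fake_forward` is a hypothetical stand-in for a real model call, and a production stack would profile GPU kernels and end-to-end request latency rather than a Python loop.

```python
import time

def fake_forward(batch):
    # Stand-in for a real model forward pass (assumption, not a real API).
    return [x * 2.0 for x in batch]

def benchmark(fn, batch, iters=100):
    # Warm up once so one-time setup costs don't skew the measurements,
    # then time repeated calls and report p50/p99 latency in seconds.
    fn(batch)
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(batch)
        times.append(time.perf_counter() - t0)
    times.sort()
    return {"p50": times[len(times) // 2], "p99": times[int(len(times) * 0.99)]}

stats = benchmark(fake_forward, list(range(32)))
```

Comparing p50 against p99 for each stage is one simple way to spot which component contributes tail latency, which is usually what matters for real-time inference.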

Qualifications

  • Experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX)

  • Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching, quantization)

  • Understanding of GPU architectures or experience with GPU kernel programming using CUDA
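Of the optimization techniques named above, quantization is the easiest to show in a few lines. This is a minimal pure-Python sketch of symmetric per-tensor int8 weight quantization; real inference stacks do this with library kernels (e.g., in PyTorch) and typically per-channel scales, so treat this as an illustration of the idea only.

```python
def quantize_int8(weights):
    """Map float weights to int8 with a single per-tensor scale."""
    # Symmetric scheme: scale so the largest magnitude maps to 127.
    # The `or 1.0` guards against an all-zero tensor (scale of 0).
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 values.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

The round trip loses at most half a quantization step per weight, which is the accuracy/memory trade-off the technique exploits.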

Final offer amounts are determined by multiple factors, including experience and expertise.

Equity: In addition to the base salary, equity may be part of the total compensation package.

