Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

AI Inference Engineer

at Perplexity AI



Mid Level | No visa sponsorship | Data Science / AI / ML

Posted 12 hours ago


Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

Join our team as an AI Inference Engineer working on real-time, large-scale deployment of ML models. You'll work with Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes to build and optimize inference APIs used by internal and external customers. Responsibilities include benchmarking and addressing bottlenecks, improving reliability and observability, and researching LLM inference optimizations for performance and latency.

We are looking for an AI Inference Engineer to join our growing team. Our current stack is Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. You will have the opportunity to work on large-scale deployment of machine learning models for real-time inference.

Responsibilities

  • Develop APIs for AI inference that will be used by both internal and external customers

  • Benchmark and address bottlenecks throughout our inference stack

  • Improve the reliability and observability of our systems and respond to system outages

  • Explore novel research and implement LLM inference optimizations
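As an illustration of the benchmarking work described above, a toy latency benchmark might time individual calls and report percentile latencies. This is a minimal sketch using only the Python standard library; `benchmark` and its parameters are hypothetical helpers, not part of any existing inference stack.

```python
import time
import statistics

def benchmark(fn, n_requests=200, warmup=20):
    """Toy latency benchmark: call fn repeatedly and report
    p50/p95 latencies in milliseconds (hypothetical helper)."""
    for _ in range(warmup):
        fn()  # warm caches before measuring
    samples = []
    for _ in range(n_requests):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
    }
```

In practice, a real inference benchmark would also track throughput and queueing delay under concurrent load, but tail percentiles like these are the usual starting point for finding bottlenecks.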

Qualifications

  • Experience with ML systems and deep learning frameworks (e.g., PyTorch, TensorFlow, ONNX)

  • Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching and quantization)

  • Understanding of GPU architectures or experience with GPU kernel programming using CUDA
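One of the optimization techniques named above, continuous batching, can be sketched as a toy scheduler: finished sequences are evicted and waiting requests admitted at every decode step, instead of waiting for the whole batch to drain as in static batching. The `Request` class and the fixed-steps decode model below are simplifications for illustration, not a real serving implementation.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Request:
    id: int
    tokens_left: int  # decode steps remaining until completion

def continuous_batching(requests, max_batch=4):
    """Toy continuous-batching loop: admit waiting requests into free
    batch slots and evict finished ones at every decode step."""
    waiting = deque(requests)
    running = []
    steps = 0
    finished_order = []
    while waiting or running:
        # Fill free batch slots before each decode step.
        while waiting and len(running) < max_batch:
            running.append(waiting.popleft())
        # One decode step for every running sequence.
        for r in running:
            r.tokens_left -= 1
        steps += 1
        # Evict completed sequences immediately, freeing their slots.
        done = [r for r in running if r.tokens_left == 0]
        finished_order.extend(r.id for r in done)
        running = [r for r in running if r.tokens_left > 0]
    return steps, finished_order
```

The payoff over static batching is that a short request never waits behind a long one in the same batch: its slot is reclaimed the moment it finishes, which raises GPU utilization and cuts tail latency.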

