
AI Inference Engineer
at Perplexity AI
Posted 11 hours ago
- Compensation: Not specified
- City: Not specified
- Country: Not specified
- Currency: Not specified
We are looking for an AI Inference Engineer to join our growing team. Our current stack is Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. You will have the opportunity to work on large-scale deployment of machine learning models for real-time inference.
Responsibilities
- Develop APIs for AI inference that will be used by both internal and external customers
- Benchmark and address bottlenecks throughout our inference stack
- Improve the reliability and observability of our systems and respond to system outages
- Explore novel research and implement LLM inference optimizations
Qualifications
- Experience with ML systems and deep learning frameworks (e.g., PyTorch, TensorFlow, ONNX)
- Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching, quantization)
- Understanding of GPU architectures or experience with GPU kernel programming using CUDA
Final offer amounts are determined by multiple factors, including experience and expertise.
Equity: In addition to the base salary, equity may be part of the total compensation package.

