Data Scientist, Evals

at Perplexity AI

Industry: Not specified

Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 10 hours ago

Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

Perplexity is seeking a Data Scientist to architect and maintain automated evaluation pipelines that assess answer quality across its products. You will design evaluation sets to measure the impact of tool calls, particularly web search retrieval, on answer quality; develop VLM-based methods to evaluate how answers render visually across platforms; and review public benchmarks to inform how product performance is measured. You will operate in a small, high-impact team where evaluation metrics directly shape product changes, collaborating with technical leadership to improve Answer Quality.

Perplexity serves tens of millions of users daily with reliable, high-quality answers grounded in an LLM-first search engine and our specialized data sources. We aim to use the latest models as they are released, but the intelligence frontier is a jagged one, and popular benchmarks do not effectively cover our use cases. In this role, you will build specialized evals to improve answer quality across Perplexity, covering search-based LLM answers and other scenarios popular with our users.

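Purely as an illustration of the kind of eval harness this paragraph describes (not Perplexity's actual pipeline: the rubric text, data shapes, and judge callable below are hypothetical), a minimal LLM-as-a-judge answer-quality sketch in Python might look like this:

```python
# Minimal sketch (hypothetical, not Perplexity's pipeline): score answers
# against a rubric with an LLM-as-a-judge callable and report the average.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    question: str
    answer: str          # answer produced by the system under test
    sources: list[str]   # retrieved documents the answer was grounded in

RUBRIC = (
    "Rate the answer from 1 (poor) to 5 (excellent) for factual accuracy "
    "and helpfulness, given only the question and the retrieved sources."
)

def run_eval(cases: list[EvalCase], judge: Callable[[str], int]) -> float:
    """Average judge score over an evaluation set; `judge` wraps an LLM call."""
    scores = []
    for case in cases:
        prompt = (
            f"{RUBRIC}\n\nQuestion: {case.question}\n"
            "Sources:\n" + "\n".join(case.sources) +
            f"\n\nAnswer: {case.answer}\nScore (1-5):"
        )
        scores.append(judge(prompt))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Stub judge for demonstration; in practice this would call an LLM.
    demo = [EvalCase("What is 2+2?", "4", ["Basic arithmetic: 2+2=4"])]
    print(run_eval(demo, judge=lambda prompt: 5))
```
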
Responsibilities

  • Architect and maintain automated evaluation pipelines to assess answer quality across Perplexity's products, ensuring high standards for accuracy and helpfulness

  • Design evaluation sets and methods specifically to measure the impact of tool calls (particularly web search retrieval) on the final answer's quality

  • Develop VLM-based solutions to programmatically evaluate how final answers render visually across different platforms and devices

  • Continuously review public benchmarks and academic evaluations for their applicability to the Perplexity product, adapting and incorporating them into our regular performance measurements

  • Operate within a small, high-impact team where your evaluation metrics directly shape product changes, collaborating closely with technical leadership to measure and improve Answer Quality

Qualifications

  • PhD or MS in a technical field or equivalent experience

  • 4+ years of experience in data science or machine learning

  • Strong proficiency in Python and SQL (expected to write production-grade code)

  • Experience building within a modern cloud data stack, specifically AWS and Databricks

  • Comfortable with agentic coding workflows and using AI-assisted development tools to iterate faster

Preferred Qualifications

  • 1+ years of experience working with LLMs at scale, specifically with LLM-as-a-judge setups

  • Prior experience working on customer-facing web products or consumer apps, with real user traffic at scale

  • A strong research background, with experience applying research methods to real-world ML problems

  • Experience defining evaluation metrics (e.g., factual consistency, hallucination rate, retrieval precision) and building ground truth datasets (a brief illustrative sketch of such metrics follows this list)

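As a hedged illustration of the metrics named in the last bullet above, here is a small sketch of retrieval precision and a crude hallucination-rate proxy computed against a ground-truth set; the function names and the set-based ground-truth representation are invented for the example and are not part of the role description:

```python
# Illustrative only: two toy evaluation metrics against ground truth.
def retrieval_precision(retrieved: list[str], relevant: set[str]) -> float:
    """Fraction of retrieved documents that appear in the ground-truth relevant set."""
    if not retrieved:
        return 0.0
    return sum(doc in relevant for doc in retrieved) / len(retrieved)

def hallucination_rate(claims: list[str], supported: set[str]) -> float:
    """Fraction of answer claims with no supporting ground-truth entry."""
    if not claims:
        return 0.0
    return sum(claim not in supported for claim in claims) / len(claims)

if __name__ == "__main__":
    print(retrieval_precision(["doc1", "doc3"], {"doc1", "doc2"}))  # 0.5
    print(hallucination_rate(["claim_a", "claim_b"], {"claim_a"}))  # 0.5
```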