
Project Maintainer – DPAI Arena Evaluation Infrastructure

at JetBrains


Mid Level · No visa sponsorship · Data Engineering

Posted 12 hours ago

Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

JetBrains is seeking a Project Maintainer to lead the technical backbone of the DPAI Arena Evaluation Infrastructure. You will design, build, and maintain the evaluation pipeline, integrate agents and runtime environments, and ensure scalability, reproducibility, and extensibility. You will define contribution guidelines, review submissions, and coordinate with product, ML, and external contributors to ensure the quality and long-term sustainability of the infrastructure. This role emphasizes ownership of the long-term technical health of the project.

At JetBrains, code is our passion. Ever since we started, we have strived to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.

We believe in building tools developers love – and we see the next frontier in AI-powered developer tools. The DPAI Arena project aims to define and maintain an open, community-driven benchmark for evaluating AI agents and IDE-embedded AI features at scale. We believe that the benchmark industry is only in its early stages, and we’re planning to take part in the evolution of the field. 

We’re looking for a Project Maintainer to lead the technical backbone of this initiative – managing the evaluation pipeline, integrating agents, and enabling contributions from the broader community.

As part of our team, you will:

  • Design, build, and maintain the evaluation infrastructure pipeline with the Eval Infrastructure team, ensuring it supports the goals of DPAI Arena, including scalability, reproducibility, and extensibility.
  • Integrate new agents, models, tools, and runtime environments into the pipeline, ensuring compatibility, stability, and maintainability across various configurations.
  • Define, document, and maintain contribution guidelines and processes, making it easy and safe for internal teams and external contributors to add new tasks, agents, and evaluation setups.
  • Curate the technical process of contributions by reviewing submissions, validating conformity with standards, coordinating merges, and ensuring the consistency, reproducibility, and quality of evaluation artifacts.
  • Support the growth of a community-driven ecosystem by enabling contributions, maintaining clear documentation and onboarding flows, engaging with contributors, and ensuring long-term sustainability of the infrastructure.
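
To make the "integrate new agents into the pipeline" responsibility concrete, here is a minimal, purely illustrative sketch of the kind of plug-in registry such an evaluation pipeline might use, so that new agents can be added without touching the core evaluation loop. All names here (`EvalResult`, `register_agent`, `run_evaluation`) are hypothetical and are not part of any actual DPAI Arena API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalResult:
    """One (agent, task) outcome produced by the pipeline. Illustrative only."""
    agent: str
    task: str
    passed: bool

# Registry mapping agent name -> callable that takes a task id and returns success.
_AGENTS: Dict[str, Callable[[str], bool]] = {}

def register_agent(name: str, solve: Callable[[str], bool]) -> None:
    """Register an agent under a unique name (hypothetical extension point)."""
    _AGENTS[name] = solve

def run_evaluation(tasks: List[str]) -> List[EvalResult]:
    """Run every registered agent on every task in a deterministic order."""
    return [
        EvalResult(agent=name, task=task, passed=_AGENTS[name](task))
        for name in sorted(_AGENTS)  # sorted names keep runs reproducible
        for task in tasks
    ]

# Example: a trivial "agent" that succeeds only on even-numbered tasks.
register_agent("baseline", lambda task: int(task.split("-")[1]) % 2 == 0)
results = run_evaluation(["task-1", "task-2"])
```

Keeping the registry decoupled from the evaluation loop is one common way to get the extensibility and reproducibility the role description asks for; a real implementation would add isolation, versioning, and artifact capture on top.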

We’ll be happy to bring you on board if you have:

  • Strong experience in building and maintaining evaluation or CI/CD-type infrastructure, pipelines, or related tooling.
  • Comfort working with multiple models, runtime environments, and tooling setups.
  • The ability to create flexible, modular, maintainable infrastructure.
  • An understanding of GenAI in the DevTooling domain.
  • Proficiency in writing and maintaining clear developer documentation, contribution guidelines, and onboarding processes.
  • Good coding and system design skills, including the ability to work with codebases, integrate new components, and manage versioning and configurations.
  • Meticulous attention to detail and a quality assurance mindset.
  • The ability to coordinate with different teams (product, ML, external contributors) and ensure smooth collaboration.
  • A self-motivated and proactive mindset in taking ownership of the long-term technical health of the project.

We’ll be especially thrilled if you have:

  • Experience with AI-agent integration, multi-model pipelines, and benchmarking frameworks.
  • Familiarity with open-source and community-driven development workflows, contribution processes, and code review practices.

#LI-KP1

We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
