
Senior Lead Software Engineer, SRE - Data Platforms
at J.P. Morgan
Posted 18 hours ago
- Compensation: Not specified
- City: Jersey City
- Country: United States
- Currency: Not specified
This role seeks a Senior Lead Software Engineer - SRE to design, implement, and operate managed AWS Databricks data platforms, providing engineering and operational support for data engineering, ML, and application teams. The role drives SRE best practices, including SLIs/SLOs, observability, incident response, and automation, to improve reliability and capacity planning. It involves collaborating with cross-functional teams, evaluating vendor solutions, and developing infrastructure as code (Terraform) and CI/CD pipelines while writing production-quality Python code. Focus areas include large-scale data processing (Spark), platform administration, and operational excellence in a cloud environment.
Location: Jersey City, NJ, United States
This is an opportunity to advance your career and push the limits of what's possible.
The Chief Data & Analytics Office (CDAO) at JPMorgan Chase is responsible for accelerating the firm’s data and analytics journey. This includes ensuring the quality, integrity, and security of the company's data, as well as leveraging this data to generate insights and drive decision-making. The CDAO is also responsible for developing and implementing solutions that support the firm’s commercial goals by harnessing artificial intelligence and machine learning technologies to develop new products, improve productivity, and enhance risk management effectively and responsibly.
As a Senior Lead Software Engineer - SRE at JPMorgan Chase within the AIML Data Platforms and Chief Data and Analytics Team, you will develop and deliver advanced technology products focused on data and analytics, tackling complex cloud data platform challenges, particularly around data lake tooling. In this role, you will work in an agile environment, collaborating with cross-functional teams.
Job responsibilities
- Designs, implements, and maintains a managed AWS Databricks platform, and provides engineering and operational support for the platform to SRE and app teams.
- Performs platform design, set-up, configuration, workspace administration, and resource monitoring, providing engineering support to data engineering, data science/ML, and application/integration teams.
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture.
- Drives continuous improvement in system observability, alerting, and capacity planning.
- Collaborates with engineering and data teams to optimize infrastructure and deployment processes, focusing on automation and operational excellence.
- Executes creative software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Develops secure high-quality production code, and reviews and debugs code written by others.
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems.
- Adds to team culture of diversity, opportunity, and respect.
- Implements Site Reliability Engineering (SRE) best practices to ensure reliability, scalability, and performance of data platforms.
- Develops and maintains incident response procedures, including root cause analysis and postmortem documentation.
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years applied experience.
- Extensive experience with AWS Databricks platform administration and engineering support is required.
- Strong understanding of SRE principles, including SLIs, SLOs, error budgets, and incident management.
- Experience with monitoring tools, automation frameworks, and CI/CD pipelines.
- Proficient in Python application program development with use of automated unit testing.
- Experience with Terraform development and understanding of Terraform Enterprise.
- Experience in delivering system design, application development, testing, and operational stability.
- Knowledge of Big Data distributed compute frameworks such as Spark, AWS Glue, and MapReduce.
- Excellent troubleshooting, analytical, and communication skills.
Preferred qualifications, capabilities, and skills
- Experience building data pipelines using Spark.
- Exposure to AWS and Databricks platform administration.
- Knowledge of containerization (Docker, Kubernetes) and orchestration.
- Familiarity with distributed systems and large-scale data processing.
#CDAOdp
Drive significant business impact and tackle a diverse array of challenges that span multiple technologies and applications.



