
Data Engineer (Associate/Senior Associate) – Security Services Data Modelling and Engineering Team
at J.P. Morgan
Posted a day ago
- Compensation: Not specified
- City: Bengaluru
- Country: India
- Currency: Not specified
Join the Security Services Data Modelling and Engineering team to design and develop scalable data pipelines and reusable datasets on Databricks, enabling business insights, OKR tracking, and AI enablement across JPMorgan Chase’s Security Services. As a Data Engineer, you will collaborate with Data Architects and Business Analysts to deliver high-quality, compliant, and business-focused data solutions, and document data flows and ETL logic. You will implement data quality checks using Python, register datasets and pipelines in governance catalogues, and track work in Jira.
Location: Bengaluru, Karnataka, India
Be part of a team that creates the strategic data assets driving business insight, operational excellence, and the next generation of AI solutions. Your work will directly enable the business to answer key questions, track progress on objectives, and unlock new opportunities through data.
As a Data Engineer in the Security Services Data Modelling and Engineering Team, you will play a pivotal role in building the data foundation that drives business insights, OKR tracking, and AI enablement across JPMorgan Chase’s Security Services businesses. You will be responsible for designing and developing scalable data pipelines and reusable datasets on Databricks, working closely with Data Architects and Business Analysts to deliver high-quality, compliant, and business-focused solutions.
Job responsibilities
- Design, build, and optimize data pipelines and transformation workflows on Databricks, leveraging Python and Spark.
- Collaborate with Data Architects and Business Analysts to develop robust data models and clearly document data flows and ETL logic.
- Implement and execute data quality checks and validation modules using Python.
- Maintain transparency and accountability by tracking work and progress in Jira.
- Ensure datasets and pipelines are accurately registered in relevant catalogues and consoles, meeting governance and privacy standards.
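To give a flavour of the kind of work involved, here is a minimal sketch of a Python data quality check module like the one described above. The column names, rules, and sample data are purely hypothetical illustrations, not part of the role description; a real implementation would run against Databricks datasets rather than an in-memory frame.

```python
# Illustrative data quality checks using pandas.
# All column names ("trade_id", "notional") and sample rows are hypothetical.
import pandas as pd


def check_not_null(df: pd.DataFrame, columns: list[str]) -> dict[str, int]:
    """Return the count of null values in each required column."""
    return {col: int(df[col].isna().sum()) for col in columns}


def check_unique_key(df: pd.DataFrame, key: str) -> int:
    """Return how many rows duplicate an existing key value."""
    return int(df[key].duplicated().sum())


# Hypothetical sample dataset with one missing value and one duplicate key.
trades = pd.DataFrame({
    "trade_id": [1, 2, 2, 4],
    "notional": [100.0, None, 250.0, 75.0],
})

null_report = check_not_null(trades, ["trade_id", "notional"])
dup_count = check_unique_key(trades, "trade_id")
print(null_report, dup_count)  # {'trade_id': 0, 'notional': 1} 1
```

In practice such checks would typically be wired into the pipeline's validation stage and their results surfaced for governance reporting.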
Required qualifications, capabilities and skills
- Proven experience developing data pipelines and solutions on Databricks.
- Strong proficiency in Python, including libraries for data transformation (e.g., pandas).
- Solid understanding of ETL concepts, data modelling, and pipeline design.
- Experience with Spark and cloud data platforms.
- Ability to document data flows and transformation logic to a high standard.
- Familiarity with project management tools such as Jira.
- Collaborative mindset and strong communication skills.
Preferred qualifications, capabilities and skills
- Experience in financial services or large enterprise data environments.
- Knowledge of data governance, privacy, and compliance requirements.
- Exposure to business analysis and requirements gathering.

