
Data Engineer II
at J.P. Morgan
Posted 18 days ago
- Compensation: Not specified
- City: Bengaluru
- Country: India
- Currency: Not specified
Join the Employee Platforms team as a Data Engineer II to design, develop, and support Databricks-based data pipelines and integrations. You will write production-quality Python code, implement CI/CD automation, and troubleshoot pipeline and integration issues. The role involves collaborating with internal teams to promote Databricks best practices, documenting technical solutions, and contributing to a secure, scalable data platform. Continuous learning and participation in agile team activities are expected.
Location: Bengaluru, Karnataka, India
You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.
As a Data Engineer II at JPMorgan Chase within the Employee Platforms team, you are part of an agile team that works to enhance, design, and deliver the data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you are responsible for overseeing Databricks adoption and supporting internal teams as they integrate with Databricks. As a Software Engineer, you will collaborate with colleagues across the organization to deliver secure, scalable, and reliable technology solutions that drive business success.
Job responsibilities
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 2+ years of applied experience
- Show proficiency in Python programming
- Build and maintain data pipelines using Databricks
- Apply practical experience with CI/CD tools and automation methods
- Exhibit familiarity with agile development practices, application resiliency, and security
- Participate in code reviews and debugging activities
- Collaborate with teams to implement best practices for Databricks and data engineering
- Troubleshoot and resolve technical issues in data pipelines
- Document and communicate technical solutions effectively
- Engage in continuous improvement and learning within the team environment
Preferred qualifications, capabilities, and skills
- Demonstrate experience with AWS services such as S3, EMR, Glue, ECS/EKS, and Athena
- Obtain certifications in AWS, Databricks, or automation tools
- Gain exposure to open table formats like Iceberg or Delta Lake and data catalog tools such as AWS Glue Data Catalog
- Pursue interests in cloud computing, artificial intelligence, or mobile development
Be part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions