
Data Engineer II
at J.P. Morgan
Posted 18 days ago
- Compensation: Not specified
- City: Bengaluru
- Country: India
- Currency: Not specified
Join an agile Consumer & Community Banking data technology team to design and deliver secure, scalable data collection, storage, access, and analytics solutions. You will develop, test, and maintain critical data pipelines and architectures, update data models, and help ensure data controls and protection. The role requires strong SQL and working knowledge of NoSQL, plus experience with PySpark, Ab Initio, Snowflake and AWS. You will also customize tools and perform statistical data analysis to support business use cases.
Location: Bengaluru, Karnataka, India
Be part of a dynamic team where your distinctive skills will contribute to a winning culture.
As a Data Engineer II at JPMorganChase within Consumer & Community Banking Data Technology, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.
Job responsibilities
- Supports review of controls to ensure sufficient protection of enterprise data
- Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request
- Updates logical or physical data models based on new use cases
- Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
- Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 2+ years applied experience
- Experience across the data lifecycle
- Advanced at SQL (e.g., joins and aggregations)
- Working understanding of NoSQL databases
- Significant experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
- Experience customizing changes in a tool to generate product
- Experience with PySpark, Ab Initio, Snowflake, and AWS
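For candidates gauging the "advanced SQL" bar above, the joins and aggregations the listing mentions look like the following minimal sketch. The tables, column names, and data here are hypothetical illustrations, not part of the role description; SQLite stands in for whatever database the team actually uses.

```python
import sqlite3

# Hypothetical schema: join customers to their transactions and
# aggregate spend per customer (a join plus a GROUP BY aggregation).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE transactions (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO transactions VALUES (1, 120.0), (1, 80.0), (2, 40.0);
""")
rows = conn.execute("""
    SELECT c.name, COUNT(t.amount) AS n_txns, SUM(t.amount) AS total
    FROM customers c
    LEFT JOIN transactions t ON t.customer_id = c.id
    GROUP BY c.id, c.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Asha', 2, 200.0), ('Ravi', 1, 40.0)]
```

The same join-then-aggregate pattern carries over to PySpark (`df.join(...).groupBy(...).agg(...)`) and Snowflake SQL.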



