
Data Engineer
at J.P. Morgan
Posted 15 days ago
- Compensation: Not specified
- City: Dublin
- Country: Ireland
- Currency: Not specified
As a Data Engineer on an agile team, you will design, build, and maintain secure, scalable data collection, storage, and analytics solutions, including ELT pipelines. You will develop and test data pipelines using Python and Databricks, implement data security and entitlements, and update logical and physical data models to support business use cases. The role requires strong SQL and NoSQL understanding, experience across the data lifecycle, and working knowledge of AWS and software development lifecycle/CI/CD tools.
Location: Dublin, Ireland
Are you ready to shape the future of data engineering at JPMorgan Chase? Join a dynamic team where your unique skills will help build innovative solutions and contribute to a winning culture. You’ll have opportunities for career growth, collaborate with talented professionals, and make a real impact on our business objectives. Your expertise will empower our teams and drive success across the firm.
As a Data Engineer in our agile team, you will design and deliver reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. You will develop, test, and maintain essential data pipelines and architectures, supporting various business functions to achieve the firm’s goals. Working with us, you will use your skills to drive innovation and help shape our team culture. Together, we focus on excellence, collaboration, and continuous improvement.
Job responsibilities
- Develop workflows and ELT pipelines using Python and Databricks.
- Support review of controls to ensure sufficient protection of enterprise data.
- Implement data security using entitlements frameworks.
- Update logical or physical data models based on new use cases.
- Use SQL frequently and understand NoSQL databases.
Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 3 years of applied experience.
- Good working knowledge of AWS, Databricks, and Python.
- Experience across the data lifecycle.
- Advanced at SQL, including joins and aggregations.
- Working understanding of NoSQL databases.
- Significant experience with statistical data analysis and ability to determine appropriate tools and data patterns for analysis.
- Experience utilizing AWS Cloud Services to develop, deploy, and manage applications at scale.
- Good understanding and working knowledge of software development lifecycle tools used for configuration management, CI/CD pipelines, unit testing, regression testing, and performance testing.
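As an illustration of the SQL skills listed above (joins and aggregations), here is a minimal, self-contained sketch using Python's built-in sqlite3 module. The schema and table names are invented for the example and are not part of the role description.

```python
import sqlite3

# Illustrative schema (not from the posting): customers and their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
""")

# A join plus a GROUP BY aggregation: order count and total value per customer.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()

for name, n_orders, total in rows:
    print(name, n_orders, total)
```

The same pattern (join, group, aggregate) carries over directly to Spark SQL on Databricks, where the query would run against tables registered in the workspace catalog rather than an in-memory SQLite database.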
Preferred qualifications, capabilities, and skills
- MLOps skills.
