
Lead Software Engineer - Data Engineering

at J.P. Morgan

Tech Lead · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
Currency: Not specified
City: London
Country: United Kingdom

Hands-on Senior Lead Engineer joining JPMorgan Chase's Chase UK data team to design and deliver cloud-native data ingestion and processing solutions. You will build and optimise ETL/ELT pipelines (batch and streaming), work with Kafka, orchestration tools and lakehouse/warehouse platforms, and ensure data quality, security and compliance. The role collaborates within product-focused squads and requires strong programming (Python and a JVM language), SQL, and cloud experience to produce reliable, cost-effective data architectures.

Location: London, United Kingdom

Following the successful launch of Chase UK in 2021, we’re a new team with a new mission. We’re creating products that solve real-world problems and put customers at the centre, all in an environment that nurtures skills and helps you realise your potential. Our team is key to our success. We’re people-first. We value collaboration, curiosity and commitment.

As a hands-on Senior Lead Engineer at JPMorgan Chase within the International Consumer Bank, you are the heart of this venture, focused on getting smart ideas into the hands of our customers. You have a curious mindset, thrive in collaborative squads, and are passionate about new technology. By your nature, you are also solution-oriented, commercially savvy and have a head for fintech. You thrive in working in tribes and squads that focus on specific products and projects – and depending on your strengths and interests, you'll have the opportunity to move between them.

While we’re looking for professional skills, culture is just as important to us. We understand that everyone's unique – and that diversity of thought, experience and background is what makes a good team, great. By bringing people with different points of view together, we can represent everyone and truly reflect the communities we serve. This way, there's scope for you to make a huge difference – on us as a company, and on our clients and business partners around the world.

Job responsibilities

  • Architecture and implementation: Design and develop scalable, secure distributed architectures and solutions focused on data ingestion and processing, utilising appropriate cloud-native technologies and services.

  • Data pipeline development: Design, implement, and maintain data pipelines that efficiently collect, process, and store large volumes of data from various sources, ensuring data timeliness, quality, and completeness.

  • Security and compliance: Ensure that data solutions comply with relevant data residency and privacy regulations, and implement best practices for securing data at rest and in transit in compliance with financial regulations and firm-wide policies.
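
The pipeline responsibilities above can be sketched in miniature. The sketch below is illustrative only (the record fields and the quality rule are invented for the example); it shows the extract → transform → validate → load shape of a single batch ETL step, where malformed records are quarantined rather than halting the batch:

```python
from datetime import datetime, timezone

# Hypothetical raw records, standing in for rows pulled from a source system.
RAW_EVENTS = [
    {"id": "a1", "amount_pence": "1250", "ts": "2024-05-01T10:00:00+00:00"},
    {"id": "a2", "amount_pence": "80",   "ts": "2024-05-01T10:05:00+00:00"},
    {"id": "a3", "amount_pence": "",     "ts": "2024-05-01T10:07:00+00:00"},  # malformed
]

def transform(raw):
    """Parse and type a raw record; raise on malformed input."""
    return {
        "id": raw["id"],
        "amount_pence": int(raw["amount_pence"]),
        "ts": datetime.fromisoformat(raw["ts"]).astimezone(timezone.utc),
    }

def run_batch(raw_events):
    """Transform each record, quarantining failures instead of failing the batch."""
    loaded, quarantined = [], []
    for raw in raw_events:
        try:
            loaded.append(transform(raw))
        except (KeyError, ValueError):
            quarantined.append(raw)
    return loaded, quarantined

loaded, quarantined = run_batch(RAW_EVENTS)
# Completeness check: every input row is accounted for, good or bad.
assert len(loaded) + len(quarantined) == len(RAW_EVENTS)
```

In a production pipeline the same shape holds, with the in-memory lists replaced by real sources and sinks and the quarantine feeding a dead-letter store for reprocessing.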

Required qualifications, capabilities, and skills

  • Programming: Comfortable with Python and at least one JVM language (Java/Kotlin/Scala), including sound testing and code-review practices.

  • SQL expertise: Joins, aggregations, subqueries, and window functions.

  • Data pipelines: Design, build, and optimise production ETL/ELT pipelines (batch and streaming) using a popular framework (Spark, Flink, Dataflow, etc.).

  • Streaming: Hands-on with Kafka (topics, keys, partitions, consumer groups), at-least-once semantics, and schema-registry basics.

  • Warehousing/lakehouse: Data modelling, partitioning, and clustering; hands-on with at least one of BigQuery, Snowflake, or Databricks, plus cloud object storage or HDFS.

  • Cloud: Production experience with at least one major cloud provider (GCP/AWS) using native data services and IAM basics. FinOps-aware with cost-effective design.

  • Reliability: Data quality checks, backfills, and SLIs backed by observability and reporting.

  • Kafka Connect (sources/sinks), change data capture (CDC), and schema evolution strategies.

  • Orchestrators (Airflow/Dagster/Flyte/Prefect/Argo Workflows) and workflow patterns (dependencies, idempotency, retries, SLAs).

  • Lakehouse platforms, table formats (Delta/Iceberg/Hudi), file formats (Avro/Parquet), and time travel.

  • Security/RBAC, PII handling, and governance basics.
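
Several of the bullets above interact: under at-least-once delivery, a consumer may receive the same event more than once after a retry, so the processing step must be idempotent. A minimal sketch of that interaction, with no real Kafka dependency and an invented event shape, deduplicating by event key:

```python
# Simulated at-least-once delivery: event "e2" is redelivered after a retry.
DELIVERED = [
    {"key": "e1", "value": 10},
    {"key": "e2", "value": 25},
    {"key": "e2", "value": 25},  # duplicate delivery
    {"key": "e3", "value": 5},
]

def process_idempotently(events):
    """Apply each event at most once by tracking already-processed keys."""
    seen, total = set(), 0
    for event in events:
        if event["key"] in seen:
            continue  # duplicate: safe to skip under at-least-once delivery
        seen.add(event["key"])
        total += event["value"]
    return total

total = process_idempotently(DELIVERED)
assert total == 40  # duplicates do not double-count
```

In a real consumer the `seen` set would live in a durable store (or the sink would use upserts keyed on the event id), but the principle is the same: the effect of processing an event twice must equal the effect of processing it once.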

Preferred qualifications, capabilities, and skills

  • AWS/GCP Certifications 

 

Propel fintech innovation with cloud-native data pipelines, streaming, and secure, scalable architectures as a Senior Lead Data Engineer.
