
Lead Software Engineer - Data

at J.P. Morgan

Bulge Bracket Investment Banks

Tech Lead · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
Currency: Not specified
City: Glasgow
Country: United Kingdom

The Lead Software Engineer - Data at JPMorgan Chase in Glasgow is responsible for designing, developing, and maintaining scalable data processing solutions using Databricks, Python, PySpark, SQL, and AWS. The role leads development of ETL/ELT pipelines and fact/dimension data models; ensures data quality, lineage, and security; and supports cloud migration and modernization. It also calls for technical leadership across vendors and communities of practice, troubleshooting and automation of recurring issues, and close collaboration with business stakeholders to turn data into actionable insights.

Location: GLASGOW, LANARKSHIRE, United Kingdom

This is an opportunity to advance your career and push the limits of what's possible. Join our innovative Capital Analytics team at JPMorgan Chase, where we leverage cutting-edge technology to drive data-driven decision-making and enhance business performance. We are seeking a talented and motivated Software/Data Engineer to join our team and contribute to our mission of transforming data into actionable insights.

 

As a Lead Software Engineer at JPMorgan Chase within the Capital Technology team, you will play a crucial role in designing, developing, and maintaining scalable data processing solutions using Databricks, Python, and AWS. You will collaborate with cross-functional teams to deliver high-quality data solutions that support our business objectives.

 

Job responsibilities

  • Execute creative, data-driven software solutions, including design, development, and technical troubleshooting, with the ability to think beyond routine approaches to solve technical problems.
  • Design and implement data pipelines and scalable data processing workflows using Python, PySpark, SQL, and Databricks or Spark for large-scale, complex data environments.
  • Develop fact and dimension data models for reporting and analytics.
  • Write secure, high-quality production code, and review and debug code written by others.
  • Identify and automate remediation of recurring issues to improve the operational stability of software applications and systems.
  • Lead evaluation sessions with external vendors, startups, and internal teams to assess architectural designs, technical credentials, and applicability within existing systems.
  • Lead communities of practice across Software Engineering to promote awareness and adoption of new technologies. Foster a team culture of diversity, opportunity, inclusion, and respect.
  • Collaborate with business stakeholders to develop data management strategies, transforming data into insights that drive strategic decisions.
  • Ensure data quality, consistency, security, and lineage throughout all stages of data processing and transformation, and support data migration and modernization initiatives that transition legacy systems to cloud-based data warehouses.
  • Document data flows, logic, and transformation rules to maintain transparency and facilitate knowledge sharing.
  • Troubleshoot and resolve performance and quality issues in both batch and real-time data pipelines. Deliver comprehensive solutions to data challenges by applying appropriate data strategies and tools.
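The fact/dimension modeling responsibility above can be sketched in a few lines. This is a simplified, illustrative example in plain Python (the real work would run in PySpark on Databricks): distinct natural keys are assigned surrogate keys in a dimension table, and fact rows then reference those surrogate keys instead of the raw names. All table and column names here are hypothetical, not taken from the posting.

```python
# Illustrative star-schema ETL step: build a customer dimension with
# surrogate keys, then a fact table keyed on those surrogates.
# Names (raw_events, dim_customer, fact_sales) are made up for this sketch.

raw_events = [
    {"customer": "acme", "amount": 120.0},
    {"customer": "globex", "amount": 75.5},
    {"customer": "acme", "amount": 30.0},
]

# Dimension build: one row per distinct customer, each given a surrogate key.
dim_customer = {}
for event in raw_events:
    name = event["customer"]
    if name not in dim_customer:
        dim_customer[name] = {"customer_sk": len(dim_customer) + 1, "name": name}

# Fact build: replace the natural key with the dimension's surrogate key.
fact_sales = [
    {"customer_sk": dim_customer[e["customer"]]["customer_sk"],
     "amount": e["amount"]}
    for e in raw_events
]

print(len(dim_customer))   # 2 distinct customers
print(len(fact_sales))     # 3 fact rows
```

In PySpark the same shape would typically use `dropDuplicates` plus a key-generation step for the dimension and a `join` for the fact table, but the modeling logic is as above.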

 

Required qualifications, capabilities, and skills

  • Proven experience in data management, ETL/ELT pipeline development, and large-scale data processing.
  • Proficiency in SQL, Python, and PySpark.
  • Hands-on experience with data lake platforms (Databricks, Spark, or similar).
  • Strong understanding of data quality, security, and lineage best practices.
  • Experience with cloud-based data warehouse migration and modernization.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and documentation abilities.
  • Ability to collaborate effectively with business and technical stakeholders.
  • Carry out critical technology solutions across multiple technical areas as an integral part of an agile team.
