Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Lead Data Engineer

at J.P. Morgan

Bulge Bracket Investment Banks

Tech Lead · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
Currency: Not specified
City: Columbus
Country: United States

Senior data engineering role responsible for leading data management strategies and building scalable ETL/ELT pipelines to support business analytics. You will design, develop, and optimize data processing workflows using SQL, Python, and PySpark on data lake platforms such as Databricks/Spark, ensuring data quality, security, and lineage. The role includes migrating and modernizing legacy systems to cloud-based warehouses, troubleshooting pipeline performance, and documenting data flows while collaborating with business and technical stakeholders as part of an agile team.

Location: Columbus, OH, United States

Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.

As a Lead Data Engineer at JPMorgan Chase within the Global Technology Enterprise Software Asset Management team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities

  • Lead data management strategies in collaboration with business stakeholders, transforming data into insights that drive strategic decisions and organizational actions.
  • Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale, complex data environments.
  • Implement scalable data processing workflows in data lake platforms such as Databricks or Spark, ensuring efficient and reliable data operations.
  • Ensure data quality, consistency, security, and lineage throughout all stages of data processing and transformation.
  • Support data migration and modernization initiatives, transitioning legacy systems to cloud-based data warehouses.
  • Document data flows, logic, and transformation rules to maintain transparency and facilitate knowledge sharing across teams.
  • Troubleshoot and resolve performance and quality issues in both batch and real-time data pipelines.
  • Review existing data challenges and deliver comprehensive solutions by applying appropriate data strategies and tools.
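To illustrate the extract-transform-load pattern these responsibilities describe, here is a toy sketch using only the Python standard library. The role's actual stack is SQL, Python, and PySpark on Databricks; all record fields and quality rules below are invented for illustration only.

```python
import csv
import io

# Toy "extract" source: raw records as CSV text (invented sample data).
RAW = """trade_id,amount,currency
T1,100.50,USD
T2,,USD
T3,250.00,usd
"""

def extract(text):
    """Extract step: parse raw CSV into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform step: normalize values and separate rows that fail
    a data-quality rule (here: amount must be present)."""
    clean, rejected = [], []
    for row in rows:
        if not row["amount"]:
            rejected.append(row)          # quarantine bad rows for review
            continue
        clean.append({
            "trade_id": row["trade_id"],
            "amount": float(row["amount"]),
            "currency": row["currency"].upper(),  # consistency rule
        })
    return clean, rejected

def load(rows):
    """Load step: stand-in for writing to a warehouse table;
    returns the number of rows 'written'."""
    return len(rows)

clean, rejected = transform(extract(RAW))
loaded = load(clean)
print(loaded, len(rejected))  # 2 rows loaded, 1 rejected
```

In a real pipeline the same shape appears at scale: the extract reads from a lake or source system, the transform runs as Spark jobs with quality and lineage checks, and the load writes to a cloud warehouse, with rejected rows routed to a quarantine table rather than silently dropped.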

Required Qualifications, Capabilities, and Skills

  • Proven experience in data management, ETL/ELT pipeline development, and large-scale data processing.
  • Proficiency in SQL, Python, and PySpark.
  • Hands-on experience with data lake platforms (Databricks, Spark, or similar).
  • Strong understanding of data quality, security, and lineage best practices.
  • Experience with cloud-based data warehouse migration and modernization.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and documentation abilities.
  • Ability to collaborate effectively with business and technical stakeholders.
