Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Sr Lead Software Engineer (Data Platform)

at J.P. Morgan


Mid Level · No visa sponsorship · Data Engineering

Posted 17 hours ago

Compensation
Not specified (USD)

City: Houston
Country: United States

As a Sr Lead Software Engineer within JPMorgan Chase's Commercial & Investment Bank - Digital Client Relationship team, you will drive data collection, storage, access, and analytics in a secure, scalable way. You will design and build hybrid on-prem and public cloud data platform solutions and end-to-end data pipelines for ingestion, transformation, and distribution, supporting both batch and streaming workloads. You will own reusable data products, implement modern data lake/lakehouse architectures (including Apache Iceberg), and enable interoperability across Databricks, Snowflake, Redshift, AWS Glue, and Lake Formation. You will establish end-to-end data lineage, monitor data quality with Great Expectations, provide guidance to junior engineers, and contribute to data governance and disaster recovery planning.

Location: Houston, TX, United States

 

Be an integral part of an Agile team that's constantly pushing the envelope to enhance, build, and deliver top-notch technology products.

As a Senior Lead Software Engineer at JPMorgan Chase, within the Commercial & Investment Bank - Digital Client Relationship team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics in a secure, stable, and scalable way. You drive significant business impact through your capabilities and contributions, and apply deep technical expertise and problem-solving methodologies to tackle a diverse array of challenges that span multiple data pipelines, data architectures, and other data consumers.

Job Responsibilities

  • Designs and builds hybrid on-prem and public cloud data platform solutions
  • Designs and builds end-to-end data pipelines for ingestion, transformation, and distribution, supporting both batch and streaming workloads
  • Develops and owns data products that are reusable, well-documented, and optimized for analytics, BI, and AI/ML consumers
  • Implements and manages modern data lake and lakehouse architectures, including Apache Iceberg table formats
  • Implements interoperability across data platforms and tools, including Databricks, Snowflake, Amazon Redshift, AWS Glue, and Lake Formation
  • Establishes and maintains end-to-end data lineage to support observability, impact analysis, and regulatory requirements
  • Implements data quality validation and monitoring using frameworks such as Great Expectations
  • Provides recommendations and insight on data management and governance procedures and intricacies applicable to the acquisition, maintenance, validation, and utilization of data
  • Designs and delivers trusted data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way
  • Defines database back-up, recovery, and archiving strategy
  • Creates functional and technical documentation supporting best practices. Advises junior engineers and technologists
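The data-quality bullet above can be illustrated with a minimal sketch. The helper names below (`check_not_null`, `check_values_in_set`) are hypothetical stand-ins for the kind of column-level rule that Great Expectations expresses (such as its `expect_column_values_to_be_not_null` expectation); this is not the Great Expectations API itself.

```python
# Toy data-quality checks in plain Python, mimicking the style of rule that
# frameworks like Great Expectations encode. Illustrative only.

def check_not_null(rows, column):
    """Return (passed, failing_row_indices) for a not-null expectation."""
    failures = [i for i, row in enumerate(rows) if row.get(column) is None]
    return (len(failures) == 0, failures)

def check_values_in_set(rows, column, allowed):
    """Return (passed, failing_row_indices) for a value-set expectation."""
    failures = [i for i, row in enumerate(rows) if row.get(column) not in allowed]
    return (len(failures) == 0, failures)

# Hypothetical sample batch to validate.
orders = [
    {"id": 1, "status": "shipped"},
    {"id": 2, "status": "pending"},
    {"id": None, "status": "unknown"},
]

ok_ids, bad_ids = check_not_null(orders, "id")
ok_status, bad_status = check_values_in_set(orders, "status", {"shipped", "pending"})
print(ok_ids, bad_ids)        # False [2]
print(ok_status, bad_status)  # False [2]
```

In a real deployment such checks would run inside the pipeline after each ingestion or transformation step, with failing batches quarantined and the results fed into monitoring and lineage tooling.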
     

Required qualifications, capabilities, and skills

  • Formal training or certification on computer science concepts, or equivalent, and 5+ years of applied experience
  • Hands-on experience building and operating batch and streaming data pipelines at scale
  • Experience with Apache Iceberg and modern table formats in a lakehouse environment
  • Strong proficiency with Databricks, Snowflake, Amazon Redshift, and AWS data services such as Glue and Lake Formation
  • Experience implementing data lineage, data quality, and data observability frameworks
  • Working experience with both relational and NoSQL databases
  • Advanced understanding of database back-up, recovery, and archiving strategy
  • Experience presenting and delivering visual data

     

Drive significant business impact and tackle a diverse array of challenges that span multiple technologies and applications.

