Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Sr Lead Data Engineer - Databricks, Snowflake, Redshift, AWS

at J.P. Morgan

Bulge Bracket Investment Banks

Tech Lead · No visa sponsorship · Data Engineering

Posted 18 days ago

Compensation: Not specified
Currency: Not specified
City: Mumbai
Country: India

A Senior Lead Data Engineer on the Markets Tech team, responsible for designing and building hybrid on-prem and cloud data platform solutions. You will develop end-to-end batch and streaming data pipelines, implement modern lakehouse architectures (including Apache Iceberg), and enable interoperability across Databricks, Snowflake, Redshift, and AWS data services. The role includes establishing data lineage, quality, observability, and governance, and delivering reusable data products optimized for analytics and ML consumers. You will also define backup/recovery strategies and produce technical documentation and access control processes.

Location: Mumbai, Maharashtra, India

Embrace this pivotal role as an essential member of a high-performing team dedicated to reaching new heights in data engineering. Your contributions will be instrumental in shaping the future of one of the world’s largest and most influential companies.

As a Senior Lead Data Engineer at JPMorgan Chase within the Commercial and Investment Bank's Markets Tech Team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics in a secure, stable, and scalable way. Leverage your deep technical expertise and problem-solving capabilities to drive significant business impact and tackle a diverse array of challenges that span multiple data pipelines, data architectures, and other data consumers.

 

Job responsibilities

  • Design and build hybrid on-prem and public cloud data platform solutions
  • Design and build end-to-end data pipelines for ingestion, transformation, and distribution, supporting both batch and streaming workloads
  • Develop and own data products that are reusable, well-documented, and optimized for analytics, BI, and AI/ML consumers
  • Implement and manage modern data lake and lakehouse architectures, including Apache Iceberg table formats
  • Implement interoperability across data platforms and tools, including Databricks, Snowflake, Amazon Redshift, AWS Glue, and Lake Formation
  • Establish and maintain end-to-end data lineage to support observability, impact analysis, and regulatory requirements
  • Implement data quality validation and monitoring using frameworks such as Great Expectations
  • Provide recommendations and insights on data management and governance procedures applicable to the acquisition, maintenance, validation, and utilization of data; advise junior engineers and technologists
  • Design and deliver trusted data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way
  • Define database back-up, recovery, and archiving strategies and approve data analysis tools and processes
  • Create functional and technical documentation supporting best practices; evaluate and report on access control processes to determine the effectiveness of data asset security
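The data quality bullet above names frameworks such as Great Expectations, which center on declarative "expectations" evaluated against a dataset. As a rough sketch of that pattern only (the function and field names below are hypothetical illustrations, not the actual Great Expectations API):

```python
# Minimal sketch of the "expectations" pattern used by data quality
# frameworks such as Great Expectations. All names below are
# hypothetical illustrations, NOT the real Great Expectations API.
from dataclasses import dataclass, field


@dataclass
class ExpectationResult:
    name: str
    passed: bool
    failures: list = field(default_factory=list)


def expect_not_null(column):
    """Fail if any row has a null value in `column`."""
    def check(rows):
        bad = [r for r in rows if r.get(column) is None]
        return ExpectationResult(f"not_null:{column}", not bad, bad)
    return check


def expect_between(column, lo, hi):
    """Fail if any non-null value in `column` falls outside [lo, hi]."""
    def check(rows):
        bad = [r for r in rows
               if r.get(column) is not None and not lo <= r[column] <= hi]
        return ExpectationResult(f"between:{column}", not bad, bad)
    return check


def validate(rows, expectations):
    """Run every expectation against the row set and collect results."""
    return [check(rows) for check in expectations]


# Toy trade data with two deliberate quality problems.
rows = [
    {"trade_id": 1, "notional": 500.0},
    {"trade_id": 2, "notional": None},    # null notional
    {"trade_id": 3, "notional": -10.0},   # negative notional
]

results = validate(rows, [
    expect_not_null("notional"),
    expect_between("notional", 0.0, 1_000_000.0),
])

for r in results:
    status = "PASS" if r.passed else f"FAIL ({len(r.failures)} rows)"
    print(r.name, status)
```

In a production pipeline such checks would typically run after each transformation stage, with failed expectations feeding the observability and lineage tooling the posting describes.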

 

Required qualifications, capabilities, and skills

  • Formal training or certification on computer science concepts or equivalent and 5+ years applied experience
  • Hands-on experience building and operating batch and streaming data pipelines at scale
  • Experience with Apache Iceberg and modern table formats in a lakehouse environment
  • Strong proficiency with Databricks, Snowflake, Amazon Redshift, and AWS data services such as Glue and Lake Formation
  • Experience implementing data lineage, data quality, and data observability frameworks
  • Working experience with both relational and NoSQL databases
  • Advanced understanding of database back-up, recovery, and archiving strategies
  • Experience presenting and delivering visual data
 

 

Drive business impact and tackle data engineering challenges that span multiple data pipelines, architectures, and consumers
