Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Principal Data Engineer

at J.P. Morgan

Tech Lead · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified

Senior data engineering role responsible for architecting and delivering hybrid on-prem and cloud data platform solutions, including data lakes and lakehouses. You'll design and build end-to-end batch and streaming pipelines, implement data lineage, quality, governance, and fine-grained access controls, and optimize platforms for performance and cost. The role requires technical leadership, collaboration with product and analytics teams, and the ability to influence architecture standards across the firm.

Location: Mumbai, Maharashtra, India

Join our innovative team and shape the future of software development.

As a Principal Data Engineer at JPMorgan Chase, you provide expertise and data engineering excellence as an integral part of an agile team to enhance, build and deliver data collection, storage, access, and analytic solutions in a secure, stable, and scalable way. You leverage your advanced technical capabilities and collaborate with colleagues across the organization to drive best-in-class outcomes across various data pipelines and data architectures to support one or more of the firm’s portfolios.

Job Responsibilities

  • Architects hybrid on-prem and public cloud data platform solutions
  • Designs and builds end-to-end data pipelines for ingestion, transformation, and distribution, supporting both batch and streaming workloads
  • Develops and owns data products that are reusable, well-documented, and optimized for analytics, BI, and AI/ML consumers. Implements and manages modern data lake and lakehouse architectures, including Apache Iceberg table formats
  • Implements interoperability across data platforms and tools, including Databricks, Snowflake, Amazon Redshift, AWS Glue, and Lake Formation
  • Establishes and maintains end-to-end data lineage to support observability, impact analysis, and regulatory requirements
  • Defines and enforces data quality standards, implementing automated validation and monitoring using frameworks such as Great Expectations
  • Partners with governance, risk, and compliance teams to ensure adherence to firmwide data governance, retention, and regulatory policies
  • Designs and implements fine-grained data access controls and entitlements, leveraging tools such as Immuta
  • Optimizes data platforms for performance, scalability, cost efficiency, and reliability. Collaborates closely with product managers, analytics teams, and platform engineers to align data solutions with business needs
  • Provides technical leadership through architecture reviews, code reviews, and design guidance across teams
  • Acts on previously identified opportunities to converge physical, IT, and data security architecture to manage access. Assists in analyzing critical trends and insights from visualizations, and evaluates and selects data visualization tools across the firm
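One of the duties above is automated data-quality validation using frameworks such as Great Expectations. As a rough illustration of the underlying idea only (this is not the Great Expectations API; the rule names, columns, and sample records below are all hypothetical), a rule-based validator for a batch of records might look like:

```python
# Minimal sketch of rule-based data-quality validation, in the spirit of
# frameworks like Great Expectations. Columns and rules are illustrative.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Expectation:
    column: str
    check: Callable[[Any], bool]   # predicate applied to each value
    description: str


def validate(rows: list[dict], expectations: list[Expectation]) -> list[str]:
    """Return human-readable failures; an empty list means the batch passed."""
    failures = []
    for exp in expectations:
        bad = [r for r in rows if not exp.check(r.get(exp.column))]
        if bad:
            failures.append(f"{exp.column}: {exp.description} ({len(bad)} bad rows)")
    return failures


# Example rules: enforce non-null trade IDs and positive notionals before loading.
rules = [
    Expectation("trade_id", lambda v: v is not None, "must not be null"),
    Expectation("notional", lambda v: isinstance(v, (int, float)) and v > 0,
                "must be a positive number"),
]

batch = [
    {"trade_id": "T1", "notional": 100.0},
    {"trade_id": None, "notional": -5},
]
print(validate(batch, rules))
```

In a production setting the failure list would feed monitoring and alerting rather than a print, which is the "automated validation and monitoring" the bullet describes.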

Required qualifications, capabilities, and skills

  • Formal training or certification on Machine Learning concepts and 10+ years of applied experience, plus 5+ years of experience leading technologists to manage, anticipate, and solve complex technical issues within your domain of expertise
  • Hands-on experience building and operating batch and streaming data pipelines at scale
  • Experience with Apache Iceberg and modern table formats in lakehouse environments
  • Strong proficiency with Databricks, Snowflake, Amazon Redshift, and AWS data services such as Glue and Lake Formation
  • Experience implementing data lineage, data quality, and data observability frameworks
  • Proven experience with data governance and entitlement platforms (e.g., Immuta)
  • Strong understanding of secure data access patterns in large, regulated environments
  • Ability to influence data architecture standards and best practices across multiple teams
  • Familiarity with the sell-side Markets business. Experience applying expertise and new methods to determine solutions for complex technology problems in one or more technical disciplines
  • Ability to present and effectively communicate with Senior Leaders and Executives
  • Demonstrates a proficient understanding of existing data management systems and a commitment to continuously learning new ones

Provide data engineering expertise to enhance, build, and deliver data collection, storage, access, and analytic solutions
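The ingestion, transformation, and distribution duties described in the responsibilities can be sketched in miniature. This is an illustrative toy, not any real pipeline at the firm; the source format, schema, and aggregation are invented for the example:

```python
# Minimal sketch of a batch ingest -> transform -> distribute pipeline.
# All sources, schemas, and sinks here are hypothetical.
import csv
import io


def ingest(raw_csv: str) -> list[dict]:
    """Ingestion: parse raw records from a source system."""
    return list(csv.DictReader(io.StringIO(raw_csv)))


def transform(rows: list[dict]) -> list[dict]:
    """Transformation: normalize types and drop invalid records."""
    out = []
    for r in rows:
        try:
            out.append({"symbol": r["symbol"].upper(), "qty": int(r["qty"])})
        except (KeyError, ValueError):
            continue  # in practice, route bad records to a dead-letter store
    return out


def distribute(rows: list[dict]) -> dict[str, int]:
    """Distribution: aggregate and hand off to downstream consumers."""
    totals: dict[str, int] = {}
    for r in rows:
        totals[r["symbol"]] = totals.get(r["symbol"], 0) + r["qty"]
    return totals


raw = "symbol,qty\naapl,10\nmsft,5\naapl,x\n"
print(distribute(transform(ingest(raw))))
```

A streaming variant would apply the same transform logic per event or micro-batch instead of per file, which is why the posting asks for experience with both workload styles.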
