
Data Engineer III

at J.P. Morgan


Mid Level · No visa sponsorship · Data Engineering

Posted 18 days ago


Compensation: Not specified
Currency: Not specified
City: Mumbai
Country: India

As a Data Engineer III at JPMorgan Chase, you will design and deliver hybrid on-prem and cloud data platform solutions and build end-to-end data pipelines for both batch and streaming workloads. You will implement modern data lake and lakehouse architectures (including Apache Iceberg), ensure end-to-end data lineage and data quality (e.g., with Great Expectations), and enable interoperability across tools such as Databricks, Snowflake, Amazon Redshift, AWS Glue, and Lake Formation. The role requires 3+ years of applied experience, strong SQL skills, familiarity with NoSQL, coding experience in modern programming languages, and a practical grounding in system design, testing, and operational stability. You will produce reusable data products optimized for analytics, BI, and AI/ML consumers while supporting observability and regulatory controls.
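
To ground the pipeline and lakehouse terms above, here is a minimal sketch of a batch ingestion step that writes to an Apache Iceberg table with PySpark. It is not from the posting: the catalog name (demo), warehouse path, input path, and table name are hypothetical assumptions, and it presumes the Iceberg Spark runtime jar is on the classpath.

    # Minimal batch ingest -> transform -> Iceberg write (all names hypothetical).
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("iceberg-batch-sketch")
        .config("spark.sql.extensions",
                "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
        .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.demo.type", "hadoop")
        .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
        .getOrCreate()
    )

    # Ingestion: read a raw batch drop (path is illustrative).
    trades = spark.read.json("/data/raw/trades/")

    # Transformation: keep settled trades only.
    settled = trades.filter(trades.status == "SETTLED")

    # Distribution: write to an Iceberg table; Iceberg tracks schema and snapshots.
    settled.writeTo("demo.markets.settled_trades").createOrReplace()

A streaming variant would read with spark.readStream and write with writeStream; the Iceberg table format is the same in both cases, which is what makes it useful for mixed batch and streaming workloads.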

Location: Mumbai, Maharashtra, India

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Data Engineer III at JPMorgan Chase within the Commercial & Investment Bank, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities

  • Builds hybrid on-prem and public cloud data platform solutions
  • Builds end-to-end data pipelines for ingestion, transformation, and distribution, supporting both batch and streaming workloads
  • Develops data products that are reusable, well-documented, and optimized for analytics, BI, and AI/ML consumers
  • Implements modern data lake and lakehouse architectures, including Apache Iceberg table formats
  • Implements interoperability across data platforms and tools, including Databricks, Snowflake, Amazon Redshift, AWS Glue, and Lake Formation
  • Establishes and maintains end-to-end data lineage to support observability, impact analysis, and regulatory requirements
  • Implements data quality validation and monitoring using frameworks such as Great Expectations (a minimal sketch follows this list)
  • Supports review of controls to ensure sufficient protection of enterprise data
  • Updates logical or physical data models based on new use cases
  • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
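
The data-quality bullet above names Great Expectations. Because the Great Expectations API has changed significantly across versions, the snippet below uses the classic pandas-backed interface as an assumption, and the table columns (trade_id, notional) are invented; treat it as a minimal sketch rather than the team's actual setup.

    # Hypothetical data-quality check with Great Expectations' classic
    # pandas interface (API names vary by GX version).
    import pandas as pd
    import great_expectations as ge

    df = pd.DataFrame({
        "trade_id": [1, 2, 3],
        "notional": [1_000_000.0, 250_000.0, None],  # one bad row on purpose
    })

    batch = ge.from_pandas(df)  # wrap the frame so expectations can run on it

    # Declare expectations: keys must exist, notionals must be present and positive.
    batch.expect_column_values_to_not_be_null("trade_id")
    batch.expect_column_values_to_not_be_null("notional")          # fails: row 3
    batch.expect_column_values_to_be_between("notional", min_value=0)

    result = batch.validate()  # per-expectation outcomes, suitable for monitoring
    print(result.success)      # False, because one notional is missing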

 

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 3+ years applied experience
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Proficient in coding in one or more languages
  • Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
  • Experience across the data lifecycle
  • Advanced at SQL (e.g., joins and aggregations; see the example after this list)
  • Working understanding of NoSQL databases
  • Significant experience with statistical data analysis and the ability to choose appropriate tools and data patterns for a given analysis
  • Experience customizing tooling to generate data products
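
One hedged reading of "advanced SQL (joins and aggregations)" is shown below: a join plus a grouped aggregation with a HAVING filter. The tables, names, and figures are invented for illustration; Python's built-in sqlite3 module runs the query so the example is self-contained.

    # Invented tables illustrating a join + aggregation in standard SQL,
    # executed with Python's built-in sqlite3 driver.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE desks  (desk_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE trades (trade_id INTEGER PRIMARY KEY,
                             desk_id  INTEGER REFERENCES desks(desk_id),
                             notional REAL);
        INSERT INTO desks  VALUES (1, 'Rates'), (2, 'FX');
        INSERT INTO trades VALUES (1, 1, 1000000.0), (2, 1, 250000.0),
                                  (3, 2, 500000.0);
    """)

    # Join trades to their desk, aggregate per desk, filter on the aggregate.
    rows = conn.execute("""
        SELECT d.name,
               COUNT(*)        AS n_trades,
               SUM(t.notional) AS total_notional
        FROM trades t
        JOIN desks d ON d.desk_id = t.desk_id
        GROUP BY d.name
        HAVING SUM(t.notional) > 400000
        ORDER BY total_notional DESC
    """).fetchall()

    for name, n_trades, total in rows:
        print(f"{name}: {n_trades} trades, {total:,.0f} total notional")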

 

Preferred qualifications, capabilities, and skills

  • Familiarity with modern front-end technologies
  • Exposure to cloud technologies

Develop, test, and maintain critical data pipelines and architectures across multiple technical areas
