Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Data Engineer II

at J.P. Morgan

Bulge Bracket Investment Banks

Mid Level · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified (currency not specified)
City: Pune
Country: India

JPMorgan Chase is hiring a Data Engineer II for the Connected Commerce Travel Technology team to design, build, and maintain large-scale cloud-based data integration and analytical solutions. The role focuses on developing scalable data pipelines and models, optimizing storage and processing for high-volume datasets, and ensuring data integrity and quality. You will collaborate with cross-functional teams in an Agile environment and apply TDD/CI-CD practices to deliver secure, stable, and scalable solutions. Preferred technologies include Python/PySpark, Spark, cloud lakehouse services (AWS/Databricks), Airflow, Terraform, and Apache Iceberg.

Location: Pune, Maharashtra, India

We have an exciting and rewarding opportunity for you to take your Data Engineer career to the next level.
As a Data Engineer II at JPMorganChase, within Consumer & Community Banking on the Connected Commerce Travel Technology team, you will be part of an agile team that designs, builds, and maintains cutting-edge, large-scale data integration and analytical solutions on the cloud in a secure, stable, and scalable way. In this role, you'll leverage your technical expertise and business acumen to transform complex, high-volume data into powerful, actionable insights, driving strategic value for our stakeholders. This is an exciting opportunity to shape the future of how Chase Travel data is managed for analytical needs.
 

Job responsibilities

  • Design, develop, and maintain scalable, large-scale data processing pipelines and infrastructure on the cloud, following engineering standards, governance standards, and technology best practices.
  • Develop and optimize data models for large-scale datasets, ensuring efficient storage, retrieval, and analytics while maintaining data integrity and quality.
  • Collaborate with cross-functional teams to translate business requirements into scalable and effective data engineering solutions.
  • Demonstrate a passion for innovation and continuous improvement in data engineering, proactively identifying opportunities to enhance data infrastructure, data processing and analytics capabilities.

 

Required qualifications, capabilities, and skills

  • Strong analytical, problem-solving, and critical-thinking skills.
  • Proficiency in at least one programming language (Python preferred; alternatively Java or Scala).
  • Proficiency in at least one distributed data processing framework (Spark or similar).
  • Proficiency in at least one cloud data lakehouse platform (AWS data lake services or Databricks; alternatively Hadoop).
  • Proficiency in at least one scheduling/orchestration tool (Airflow preferred; alternatively AWS Step Functions or similar).
  • Proficiency with relational and NoSQL databases.
  • Proficiency in data structures, data serialization formats (JSON, Avro, Protobuf, or similar), and big-data storage formats (Parquet, Iceberg, or similar).
  • Experience working in teams following Agile methodology.
  • Experience with test-driven development (TDD) or behavior-driven development (BDD) practices, as well as with continuous integration and continuous deployment (CI/CD) tools.
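The posting itself contains no code, but the data-integrity and TDD themes above can be illustrated with a minimal, hypothetical Python sketch (stdlib only; the `user_id` field and event shape are invented for illustration) of the kind of validation step a pipeline might apply to newline-delimited JSON records before loading them downstream:

```python
import json

def parse_events(lines):
    """Parse newline-delimited JSON records, dropping malformed rows.

    A toy stand-in for a pipeline's data-quality step: each input
    line is one serialized event; rows that fail to parse, or that
    lack a required field, are filtered out rather than failing the
    whole batch.
    """
    events = []
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed rows, keep the batch alive
        if "user_id" in record:  # required field (hypothetical)
            events.append(record)
    return events

raw = [
    '{"user_id": 1, "amount": 42.0}',
    'not json',                         # malformed row
    '{"amount": 7.5}',                  # missing required field
]
clean = parse_events(raw)
print(len(clean))  # → 1
```

In a TDD workflow, the filtering rules would be pinned down first as small assertions like these, then the function written to satisfy them.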

 

Preferred qualifications, capabilities, and skills
  • Proficiency in Python and PySpark.
  • Proficiency in infrastructure as code (preferably Terraform; alternatively AWS CloudFormation).
  • Experience with AWS Glue, AWS S3, AWS Lakehouse, AWS Athena, Airflow, Kinesis, and Apache Iceberg.
  • Experience working with Jenkins.
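Orchestrators such as Airflow (named above) model a pipeline as a directed acyclic graph of tasks, where each task runs only after its dependencies complete. Purely as an illustration of that mental model (task names are invented, and this is a stdlib toy, not Airflow's API), a DAG and its execution order can be sketched as:

```python
from graphlib import TopologicalSorter

# A hypothetical pipeline as a DAG: each task maps to the set of
# tasks it depends on (the model behind Airflow-style schedulers).
pipeline = {
    "extract":   set(),
    "validate":  {"extract"},
    "transform": {"validate"},
    "load":      {"transform"},
    "report":    {"load"},
}

# static_order() yields tasks so that every dependency precedes
# its dependents.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # → ['extract', 'validate', 'transform', 'load', 'report']
```

A real scheduler adds retries, backfills, and parallel execution of independent branches on top of exactly this ordering guarantee.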
Join a world-class data engineering team that builds and delivers cutting-edge, large-scale data integration and analytical solutions.
