Software Engineer II - Data Engineer - Spark, Python, Databricks or AWS EMR

at J.P. Morgan

Bulge Bracket Investment Banks

Junior · No visa sponsorship · Data Engineering

Posted 6 days ago

Compensation: Not specified
Currency: Not specified
City: Bengaluru
Country: India

Join JPMorgan Chase as a Software Engineer II – Data Engineer focusing on Spark, Python, and data tooling on Databricks or AWS EMR. You will design, develop, and maintain scalable data pipelines and ETL processes, work with large datasets, and write SQL queries for data extraction and analysis. You will implement data processing workflows on AWS services (S3, ECS, Lambda, EMR, Glue) and develop Python scripts for automation, while ensuring data quality, security, and reliability. This role is based in Bengaluru, India, and operates within an agile team delivering secure, scalable data products.

Location: Bengaluru, Karnataka, India

You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.

 

As a Software Engineer II - Data Engineer - Spark, Python, Databricks or AWS EMR at JPMorgan Chase within the Commercial & Investment Bank, you'll be a part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

 

Job responsibilities

 

  • Design, develop, and maintain scalable data pipelines and ETL processes (a minimal PySpark sketch follows this list).
  • Work with large datasets using Spark on Databricks or AWS EMR.
  • Write efficient SQL queries for data extraction, transformation, and analysis.
  • Collaborate with data scientists, analysts, and other engineering teams to deliver high-quality data solutions.
  • Implement data processing workflows on AWS services such as S3, ECS, Lambda, EMR, and Glue.
  • Develop and maintain Python scripts for data processing and automation.
  • Ensure data quality, integrity, and security across all data engineering activities.
  • Troubleshoot and resolve data-related issues in a timely manner.
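
For context, a minimal PySpark sketch of the kind of pipeline these responsibilities describe: read raw data from S3, transform it with Spark SQL functions, write curated Parquet back, and run one SQL query for analysis. It runs the same way on Databricks or AWS EMR, since both provide a preconfigured SparkSession; every bucket, path, view, and column name below is a hypothetical example, not taken from the posting.

```python
# Minimal PySpark ETL sketch (hypothetical names throughout).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trades-daily-etl").getOrCreate()

# Extract: read raw CSV files from S3.
raw = spark.read.csv("s3://example-bucket/raw/trades/",
                     header=True, inferSchema=True)

# Transform: drop bad rows, derive a date column, aggregate per desk and day.
daily = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("trade_date", F.to_date("trade_ts"))
       .groupBy("trade_date", "desk")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("trade_count"))
)

# Load: write partitioned Parquet for downstream consumers.
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/trades_daily/")

# Analysis: SQL extraction over the curated data.
daily.createOrReplaceTempView("trades_daily")
spark.sql("""
    SELECT trade_date, desk, total_amount
    FROM trades_daily
    ORDER BY total_amount DESC
    LIMIT 10
""").show()
```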

 

Required qualifications, capabilities, and skills

 

  • Formal training or certification in software engineering concepts and 2+ years of applied experience.
  • Proven expertise in Data Engineering with Spark.
  • Hands-on experience with Databricks or AWS EMR.
  • Strong knowledge of SQL and database concepts.
  • Experience in ETL and data processing workflows.
  • Proficiency in AWS services: S3, ECS, Lambda, EMR/Glue (see the boto3 sketch after this list).
  • Advanced skills in Python programming.
  • Excellent problem-solving and analytical abilities.
  • Bachelor’s degree in Computer Science, Information Technology, or related field (or equivalent experience).
  • Strong communication and collaboration skills.
  • Ability to work independently and as part of a team.
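
For context on the AWS items above, a small boto3 automation sketch: check an S3 prefix for data and start a Glue job if anything is there. The bucket, prefix, and job name are hypothetical placeholders, and the same pattern could equally run as a Lambda handler.

```python
# boto3 automation sketch (hypothetical bucket, prefix, and job name).
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

BUCKET = "example-bucket"
PREFIX = "raw/trades/"
GLUE_JOB = "trades-daily-etl"

def start_etl_if_new_data() -> None:
    """Start the Glue job when objects exist under the raw prefix."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=1)
    if resp.get("KeyCount", 0) > 0:
        run = glue.start_job_run(JobName=GLUE_JOB)
        print(f"Started Glue job run {run['JobRunId']}")
    else:
        print("No data under prefix; nothing to do.")

if __name__ == "__main__":
    start_etl_if_new_data()
```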

 

 

Preferred qualifications, capabilities, and skills

  • Experience with Infrastructure as Code (IaC) using Terraform or CloudFormation.
  • Familiarity with writing unit tests for Python code (see the pytest sketch after this list).
  • Knowledge of version control systems such as Bitbucket or GitHub.
  • Understanding of CI/CD pipelines and automation tools.
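
For the unit-testing item above, a minimal pytest sketch: a small, pure transformation function with two tests. The function and its inputs are illustrative, not from the posting.

```python
# pytest sketch for a small data-cleaning helper (illustrative only).
import pytest

def normalize_amount(raw: str) -> float:
    """Parse an amount string such as '1,234.50' into a float."""
    return float(raw.replace(",", ""))

def test_strips_thousands_separator():
    assert normalize_amount("1,234.50") == pytest.approx(1234.50)

def test_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        normalize_amount("not-a-number")
```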

 

Serve as an emerging member of an agile team to design and deliver market-leading technology products in a secure and scalable way.
