Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Software Engineer II - Data Engineering

at J.P. Morgan

Bulge Bracket Investment Banks

Junior · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
Currency: Not specified
City: Not specified
Country: India

Join JPMorgan Chase's Consumer & Community Banking Data Technology team as a Software Engineer II to design, develop, and maintain scalable, secure data engineering solutions. You'll build and optimize ETL/ELT pipelines, data models, and orchestration while applying best practices for data quality, lineage, and performance. The role involves hands-on coding in Python/PySpark, using orchestration tools (Airflow/Prefect), cloud platforms, and CI/CD to deliver stable, automated data products.
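Purely as an illustration of the ETL/ELT work this summary describes, the shape of such a pipeline can be sketched in plain Python (the role itself names PySpark and Airflow, which are not used here; `clean_record`, `raw_events`, and `warehouse` are invented names, not part of the posting):

```python
# Minimal extract-transform-load sketch: raw records are cleaned,
# validated against a basic data-quality rule, and loaded into an
# in-memory "warehouse" table. All names here are illustrative.

raw_events = [
    {"user": " alice ", "amount": "10.50"},
    {"user": "BOB", "amount": "3.25"},
    {"user": "", "amount": "7.00"},  # fails the quality check below
]

def clean_record(rec):
    """Normalize one raw event; return None if it fails validation."""
    user = rec["user"].strip().lower()
    if not user:  # data-quality rule: user field is required
        return None
    return {"user": user, "amount": float(rec["amount"])}

# "Load" step: keep only records that pass validation.
warehouse = [r for r in (clean_record(e) for e in raw_events) if r is not None]
total = sum(r["amount"] for r in warehouse)
print(len(warehouse), total)
```

In a real pipeline of the kind described, the transform would run distributed in PySpark and the whole job would be scheduled and monitored by an orchestrator such as Airflow.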

Location: India

You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.

As a Software Engineer II at JPMorganChase within Consumer & Community Banking - Data Technology, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job responsibilities

 

  • Executes standard software solutions, design, development, and technical troubleshooting
  • Writes secure and high-quality code using the syntax of at least one programming language with limited guidance
  • Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications
  • Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation
  • Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity
  • Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development
  • Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems
  • Adds to team culture of diversity, opportunity, inclusion, and respect

 

 

Required qualifications, capabilities, and skills

 

  • [Action Required: Insert 1st bullet according to Years of Experience table]
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Design, build, and maintain scalable ETL/ELT pipelines for batch data with ETL tools (e.g., Ab Initio) and libraries (e.g., PySpark).
  • Develop data models (dimensional, star/snowflake) and warehouse/lakehouse structures.
  • Implement orchestration and scheduling (e.g., Airflow, Prefect) with robust monitoring and alerting.
  • Ensure data quality, lineage, and governance using appropriate frameworks and catalog tools.
  • Optimize pipeline performance, storage formats (e.g., Parquet, Delta), and query execution.
  • Advanced SQL skills (query optimization, window functions, schema design), Python programming, and experience with PySpark for distributed data processing; familiar with data warehousing/lakehouse concepts and columnar formats (Parquet, ORC).
  • Proficient in workflow orchestration (Airflow, Prefect, Dagster), cloud platforms (AWS, Azure, or GCP), version control (Git), CI/CD pipelines (GitHub Actions, Azure DevOps), and containerization (Docker).
  • Knowledge of data quality and testing practices (Great Expectations, unit/integration tests), with strong problem-solving, communication, and documentation abilities.
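The advanced-SQL bullet above mentions window functions; as a small self-contained illustration using Python's built-in sqlite3 module (the `orders` table and its columns are invented for this sketch, not taken from the posting):

```python
import sqlite3

# Rank each customer's orders by amount using a window function,
# the kind of query the SQL qualification above refers to.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', 50.0), ('alice', 120.0), ('bob', 80.0);
""")
rows = conn.execute("""
    SELECT customer, amount,
           RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY customer, rnk
""").fetchall()
print(rows)
```

`PARTITION BY` restarts the ranking per customer, so each customer's largest order gets rank 1; the same pattern scales to PySpark's `Window` API for distributed processing.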

 

 

Preferred qualifications, capabilities, and skills

 

  • Familiarity with modern front-end technologies
  • Exposure to cloud technologies

 

Serve as an emerging member of an agile team to design and deliver market-leading technology products in a secure and scalable way
