Python Data Engineer II

at J.P. Morgan

Bulge Bracket Investment Banks

Junior · No visa sponsorship · Data Engineering

Posted 16 hours ago

Compensation: Not specified
Currency: Not specified
City: Jersey City
Country: United States

Join JPMorgan Chase as a Python Data Engineer II on an agile team within Consumer & Community Banking to design, build, and maintain data collection, storage, access, and analytics solutions. You will develop ETL and data pipelines using Python and PySpark on AWS (and integrations like Kafka, S3, Snowflake), deliver secure production code, and perform performance tuning to eliminate bottlenecks. The role requires hands-on system design, debugging, data modeling, and working with big-data formats and serialization standards while following CI/CD, TDD/BDD, and agile practices.
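
As a rough, illustrative sketch of the kind of batch ETL pipeline the role describes, the snippet below reads raw JSON events from S3 with PySpark, filters malformed records, and writes partitioned Parquet back to S3. The bucket names, paths, and column names are placeholder assumptions, not details from the listing.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative only: bucket names, paths, and columns are placeholders.
spark = (
    SparkSession.builder
    .appName("example-events-etl")
    .getOrCreate()
)

# Extract: read raw JSON events landed in S3.
raw = spark.read.json("s3a://example-raw-bucket/events/2024/")

# Transform: drop malformed records and derive a date partition column.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream analytics and querying.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-curated-bucket/events/"))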

Location: Jersey City, NJ, United States

You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.


As a Data Engineer II at JPMorganChase within Consumer & Community Banking, you are part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you execute data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

 

 

Job responsibilities

  • Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
  • Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
  • Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
  • Proactively identifies hidden problems, patterns in data, and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies
  • Adds to team culture of diversity, opportunity, inclusion, and respect

 

Required qualifications, capabilities, and skills

  • Formal training or certification on Software Engineering concepts and 2+ years applied experience 
  • Experience with ETL processes and advanced ETL concepts
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Experience with AWS and with the design, implementation, and maintenance of data pipelines using Python and PySpark
  • Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
  • Proven experience in performance tuning to ensure jobs run at optimal levels without performance bottlenecks
  • Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
  • Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
  • Proficiency in Unix scripting and data structures
  • Familiarity with data serialization formats such as JSON, Avro, or Protobuf, and big-data storage formats such as Parquet or Iceberg
  • Experience with batch, micro-batch, and streaming data processing
  • Knowledge of one or more data modelling techniques such as Dimensional, Data Vault, Kimball, or Inmon
  • Experience with Agile methodology, TDD or BDD, and CI/CD tools (a small testing sketch follows this list)
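
As a minimal illustration of the TDD expectation above, the sketch below unit-tests a small PySpark transformation with pytest against a local Spark session; the transformation, column names, and sample data are hypothetical and not taken from the posting.

import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_event_date(df):
    # Hypothetical transform under test: derive a date column from a timestamp string.
    return df.withColumn("event_date", F.to_date("event_ts"))


@pytest.fixture(scope="module")
def spark():
    # Local, single-threaded Spark session so the test runs without a cluster.
    session = (
        SparkSession.builder
        .master("local[1]")
        .appName("tdd-example")
        .getOrCreate()
    )
    yield session
    session.stop()


def test_add_event_date(spark):
    df = spark.createDataFrame(
        [("e1", "2024-05-01 10:15:00")],
        ["event_id", "event_ts"],
    )
    row = add_event_date(df).collect()[0]
    assert str(row["event_date"]) == "2024-05-01"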
 
 
Preferred qualifications, capabilities, and skills
  • Advanced Python development skills, including Kafka and S3 integration and performance optimization
  • Experience in carrying out data analysis to support business insights
  • Strong skills in PySpark, AWS, and Snowflake (a streaming integration sketch follows this list)
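
As a sketch of the Kafka and S3 integration mentioned above, the snippet below uses Spark Structured Streaming to consume a Kafka topic and write micro-batches to S3 as Parquet. The broker address, topic name, and bucket paths are assumptions for illustration, and the spark-sql-kafka connector package is assumed to be on the classpath.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative only: broker, topic, and S3 paths are placeholders.
spark = SparkSession.builder.appName("example-kafka-to-s3").getOrCreate()

# Source: subscribe to a Kafka topic as a micro-batched stream.
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "example-events")
         .load()
         .select(
             F.col("value").cast("string").alias("payload"),
             F.col("timestamp"),
         )
)

# Sink: write each micro-batch to S3 as Parquet, with checkpointing for recovery.
query = (
    events.writeStream
          .format("parquet")
          .option("path", "s3a://example-curated-bucket/stream/events/")
          .option("checkpointLocation", "s3a://example-curated-bucket/checkpoints/events/")
          .trigger(processingTime="1 minute")
          .start()
)

query.awaitTermination()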

 
Be part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions
