
Data Lead Software Engineer

at J.P. Morgan

Bulge Bracket Investment Banks

Mid Level · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
Currency: Not specified
City: New York City
Country: United States

Lead engineer on the Open Banking team within Consumer & Community Banking at JPMorgan Chase, responsible for designing and delivering end-to-end cloud-native data solutions. Architect, build, and maintain large-scale data pipelines and analytical systems on AWS, ensuring data quality, performance monitoring, resilience, and security. Hands-on with Python/Java, Spark, AWS data services (Lake Formation, Glue/EMR, S3, Athena, Kinesis/MSK), and modern data formats like Parquet, Iceberg, and Avro. Mentor and coach engineering teams across the full data lifecycle and service delivery.

Location: New York, NY, United States

Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.

As a Lead Software Engineer at JPMorgan Chase on the Open Banking team within Consumer & Community Banking, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

  • Architect and oversee the design of complex data solutions that meet diverse business needs and customer requirements.
  • Guide the evolution of logical and physical data models to support emerging business use cases and technological advancements.
  • Build and manage end-to-end cloud-native data pipelines in AWS, leveraging your hands-on expertise with AWS components.
  • Build analytical systems from the ground up, providing architectural direction, translating business issues into specific requirements, and identifying appropriate data to support solutions.
  • Work across the Service Delivery Lifecycle on engineering major/minor enhancements and ongoing maintenance of existing applications.
  • Conduct feasibility studies, capacity planning, and process redesign/re-engineering of complex integration solutions.
  • Help others build code to extract raw data, coach the team on techniques to validate its quality (a brief sketch of such a check follows this list), and apply your deep data knowledge to ensure the correct data is ingested across the pipeline.
  • Guide the development of data tools used to transform, manage, and access data, and advise the team on writing and validating code to test the storage and availability of data platforms for resilience.
  • Oversee the implementation of performance monitoring protocols across data pipelines, coaching the team on building visualizations and aggregations to monitor pipeline health.
  • Coach others on implementing solutions and self-healing processes that minimize points of failure across multiple product features.
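
For illustration, a minimal data-quality gate of the kind described above might look like the sketch below. This is illustrative only and not code from the posting: the dataset path, column names, and thresholds are hypothetical placeholders, and it assumes a PySpark environment with read access to the data.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("quality-check-example").getOrCreate()

    # Hypothetical curated dataset; the path and column names are placeholders.
    df = spark.read.parquet("s3://example-curated-bucket/transactions/")

    # Basic health metrics: row count and null rate on a key column.
    total = df.count()
    null_accounts = df.filter(F.col("account_id").isNull()).count()
    null_rate = (null_accounts / total) if total else 1.0

    # Fail the run if the batch looks unhealthy (thresholds are illustrative).
    assert total > 0, "no rows ingested"
    assert null_rate < 0.01, f"null rate too high: {null_rate:.2%}"

    spark.stop()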

 

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 5+ years applied experience 
  • Extensive experience in managing the full lifecycle of data, from collection and storage to analysis and reporting.
  • Proficiency in one or more large-scale data processing frameworks, such as Spark with Java or Python, along with knowledge of data pipelines, data modeling, data warehousing, and data migration.
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
  • Solid understanding of agile methodologies, CI/CD, application resiliency, and security
  • Strong organizational, problem-solving, and critical-thinking skills; strong documentation skills
  • Proficiency in Python or Java, and proficiency in Spark
  • Hands-on experience with AWS services and their components, along with a good understanding of Kubernetes. AWS data services: proficiency in Lake Formation, Glue ETL or EMR, S3, Glue Catalog, Athena, Kinesis or MSK, and Airflow or Lambda + Step Functions + EventBridge (an illustrative pipeline sketch follows this list)
  • Data de/serialization: expertise in at least two of the following formats: Parquet, Iceberg, Avro, JSON Lines
  • AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager
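
As a concrete illustration of the Spark-on-AWS proficiency listed above, the sketch below shows a minimal PySpark batch job that reads raw JSON Lines from S3, applies a simple quality filter, and writes date-partitioned Parquet for downstream querying via Athena or the Glue Catalog. It is a sketch only: the bucket names, paths, and column names are hypothetical, and a production pipeline for this role would add orchestration (e.g. Airflow or Step Functions), monitoring, and the security controls named in the requirements.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("open-banking-ingest-example").getOrCreate()

    # Read raw JSON Lines events from a hypothetical landing bucket.
    raw = spark.read.json("s3://example-landing-bucket/raw/transactions/")

    # Basic quality gate: drop records missing required fields, derive a partition date.
    clean = (
        raw.dropna(subset=["account_id", "amount", "event_ts"])
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Write date-partitioned Parquet to a hypothetical curated bucket, where it
    # can be registered in the Glue Catalog and queried with Athena.
    (clean.write
          .mode("append")
          .partitionBy("event_date")
          .parquet("s3://example-curated-bucket/transactions/"))

    spark.stop()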

 

Preferred qualifications, capabilities, and skills

  • Familiarity with modern front-end technologies
  • Experience designing and building REST API services using Java
  • Exposure to cloud technologies and knowledge of hybrid cloud architectures
  • DevOps: Linux scripting, Jenkins, Git, CI/CD, JIRA, TDD
