Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Lead Data Engineer - Python, Pyspark

at J.P. Morgan

Bulge Bracket Investment Banks

Mid Level · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
City: Hyderabad
Country: India

Senior data engineering role on the Payments Technology team responsible for designing, building, and operating large-scale, secure data pipelines and platforms. You'll lead cross-functional collaboration to deliver batch and real-time processing solutions using Spark/Flink, Databricks, AWS Glue/EMR, and cloud-native services. The role requires strong programming skills (Python and Java), experience with data lakes, ETL, streaming (Kafka), and deploying services on Kubernetes/EKS. You will also help drive architecture, security integration, and operationalization of data services to support business objectives.

Location: Hyderabad, Telangana, India

Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.

As a Lead Data Engineer at JPMorganChase within the Payments Technology team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

Job Responsibilities 

  • Lead cross-functional collaboration: Partner with stakeholders across all JPMorgan lines of business and functions to deliver robust software and data engineering solutions.
  • Drive innovation and architecture: Spearhead the experimentation, design, development, and production deployment of advanced data pipelines, data services, and data platforms that directly support business objectives.
  • Architect scalable data solutions: Design and implement highly scalable, efficient, and reliable data processing pipelines, leveraging advanced analytics to generate actionable business insights and optimize outcomes.
  • Integrate security architecture: Proactively address opportunities to unify physical, IT, and data security architectures, ensuring comprehensive access management and data protection.

Required Qualifications, Capabilities, and Skills

  • Formal training or certification in software engineering concepts and 5+ years of applied experience.
  • Extensive experience in data technologies, with formal training or certification in large-scale technology program management.
  • Advanced programming expertise: Proficient in Java and Python, with a strong track record of building and optimizing data frameworks and solutions.
  • Comprehensive data lifecycle knowledge: Deep experience in architecting and managing data frameworks, including data lakes, and overseeing the full data lifecycle.
  • Batch and real-time processing: Proven expertise in developing batch and real-time data processing solutions using Spark or Flink.
  • Cloud data processing: Hands-on experience with AWS Glue and EMR for scalable data processing tasks.
  • Databricks proficiency: Demonstrated ability to leverage Databricks for advanced analytics and data engineering.
  • Service development and deployment: Skilled in building services using Spring Boot or Flask, and deploying them on AWS EKS or Kubernetes.
  • Database management: Strong working knowledge of both relational and NoSQL databases, with experience in ETL pipeline development for batch and real-time processing, data warehousing, and NoSQL solutions.

Preferred Qualifications, Capabilities, and Skills

  • Cloud and containerization: Expertise in Amazon Web Services (AWS), Docker, and Kubernetes for cloud-native and containerized data solutions.
  • Big data technologies: Advanced experience with Hadoop, Spark, and Kafka for distributed data processing and streaming.
  • Distributed systems: Proven ability to design and develop distributed systems for large-scale data engineering applications.