
Senior Lead Software Engineer

at J.P. Morgan

Bulge Bracket Investment Banks


Tech Lead · No visa sponsorship · Data Engineering

Posted 16 hours ago


Compensation: Not specified
Currency: Not specified
City: Not specified
Country: India

Lead a high-performing engineering team to build and operate cloud-native data platforms that ingest, process, and serve large-scale batch and streaming data. Own end-to-end delivery for data warehousing and multi-terabyte to petabyte-scale migrations, with an emphasis on reliability, performance, and cost efficiency. Provide technical leadership across Java, Python, Spark, Snowflake, and AWS, while advancing Agile practices, CI/CD automation, and operational excellence. Define and evolve the data platform architecture, roadmaps, SLAs, monitoring, and incident response, and drive a culture of ownership, collaboration, and continuous learning.

Location: Hyderabad, Telangana, India

When you mentor and advise multiple technical teams and move financial technologies forward, it’s a big challenge with big impact. You were made for this. 

 

As a Senior Lead Software Engineer at JPMorganChase within the Consumer and Community Banking Technology Team, you serve in a leadership role, providing technical coaching and advisory for multiple technical teams and anticipating the needs and potential dependencies of other functions within the firm.

Job responsibilities

  • Lead a high‑performing engineering team building and operating cloud‑native data platforms that ingest, process, and serve large‑scale batch and streaming data.
  • Own end‑to‑end delivery for data warehousing and multi‑terabyte to petabyte‑scale migration initiatives, ensuring reliability, performance, security, and cost efficiency.
  • Provide technical leadership across Java, Python, Spark, Snowflake, and AWS, while driving strong Agile practice, CI/CD automation, and operational excellence.
  • Foster a culture of ownership, collaboration, psychological safety, and continuous learning.
  • Own multi‑workstream roadmaps; plan releases, define milestones, track progress, and remove blockers to hit scope, schedule, and quality targets.
  • Create detailed estimates and work breakdown structures, manage dependencies, risks, and stakeholder expectations.
  • Define and evolve data platform architecture for batch and streaming use cases (event‑driven, microservices, warehouse patterns).
  • Lead large‑scale data migration (on‑prem to cloud), data profiling, reconciliation, lineage, quality (e.g., deduplication, schema validation), and backfills.
  • Define SLAs; implement monitoring/alerting; lead incident response, root cause analysis, and continuous improvement.
  • Contribute to team culture of diversity, opportunity, and respect.
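The migration responsibilities above call out deduplication and schema validation as concrete data-quality checks. As a minimal illustrative sketch only (the field names and expected schema below are hypothetical, not from J.P. Morgan's systems), these two checks might look like:

```python
# Illustrative data-quality checks: schema validation and deduplication.
# EXPECTED_SCHEMA and the sample records are hypothetical.

EXPECTED_SCHEMA = {"id": int, "account": str, "balance": float}

def validate_schema(record: dict) -> bool:
    """Return True if the record has exactly the expected fields and types."""
    if set(record) != set(EXPECTED_SCHEMA):
        return False
    return all(isinstance(record[k], t) for k, t in EXPECTED_SCHEMA.items())

def deduplicate(records: list, key: str = "id") -> list:
    """Keep only the first occurrence of each key, preserving input order."""
    seen, out = set(), []
    for r in records:
        if r[key] not in seen:
            seen.add(r[key])
            out.append(r)
    return out

rows = [
    {"id": 1, "account": "A-1", "balance": 10.0},
    {"id": 1, "account": "A-1", "balance": 10.0},   # exact duplicate
    {"id": 2, "account": "A-2", "balance": "bad"},  # wrong type for balance
]
clean = [r for r in deduplicate(rows) if validate_schema(r)]
print(len(clean))  # → 1: one valid, unique record survives
```

In a production migration pipeline these checks would typically run at scale in Spark (e.g., `dropDuplicates` plus schema enforcement on read), with reconciliation counts reported against the source system.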

 

Required qualifications, capabilities, and skills

  • Hands-on experience in system design, application development, testing, operational stability, and CI/CD processes.
  • Hands-on experience developing Spark-based frameworks for end-to-end ETL, ELT, and reporting solutions using key components such as Spark Core and Spark Streaming.
  • Proficiency in one or more programming languages, such as Java or Python.
  • Experience building data warehouse platforms.
  • Cloud implementation experience with AWS, including:
    • AWS data services: proficiency in Lake Formation, Glue ETL or EMR, S3, Glue Catalog, Athena, and Airflow or Lambda + Step Functions + EventBridge.
    • Data de/serialization: expertise in at least two of the following formats: Parquet, Iceberg, Avro, JSON.
    • AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager.
  • Experience with Snowflake and Databricks.
  • Experience with generative AI tools and techniques.

 

Preferred qualifications, capabilities, and skills

 

  • Hands-on, code-level experience.
  • In-depth knowledge of the financial services industry and its IT systems.
  • Practical cloud-native experience, preferably with AWS.

