Senior Lead Software Engineer - Databricks Platform Engineer

at J.P. Morgan

Bulge Bracket Investment Banks

Tech Lead · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
Currency: Not specified
City: Hyderabad
Country: India

Senior Lead Software Engineer on the Consumer & Community Banking Risk team responsible for driving SAS-to-PySpark migrations and production delivery on Databricks. You will build and maintain automated conversion utilities and CI/CD pipelines, and own data conversion/migration tasks including schema mapping, quality checks, reconciliation, and lineage. The role includes coaching engineers, running office hours for user communities, and optimizing Databricks infrastructure for performance, cost, and governance.

Location: Hyderabad, Telangana, India

Be an integral part of an agile team that's constantly pushing the envelope to enhance, build, and deliver top-notch technology products.

As a Senior Lead Software Engineer at JPMorgan Chase within the Consumer & Community Banking Risk Team, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. Drive significant business impact through your capabilities and contributions, and apply deep technical expertise and problem-solving methodologies to tackle a diverse array of challenges that span multiple technologies and applications.

Job responsibilities

  • Plan and deliver SAS-to-PySpark migrations on Databricks (cutover, validation, rollback).
  • Build and maintain automated conversion utilities and CI/CD pipelines.
  • Own data conversion and migration (schema mapping, quality checks, reconciliation, lineage); a brief illustrative sketch follows this list.
  • Run office hours for user communities, coach engineers, and drive best practices and standards.
  • Optimize Databricks infrastructure (clusters, policies, performance, and cost).
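
As a rough illustration of the migration and reconciliation work described in the responsibilities above, the sketch below translates a simple SAS summary step into PySpark and adds a minimal row-count and checksum reconciliation. This is a hypothetical example: the table and column names (credit_risk, risk_summary_sas, exposure, and so on) are assumptions made for illustration and are not taken from the posting.

```python
# Hypothetical SAS-to-PySpark conversion plus a minimal reconciliation check.
# All table and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sas_migration_recon").getOrCreate()

# SAS original (for reference):
#   proc sql;
#     create table risk_summary as
#     select region, sum(exposure) as total_exposure
#     from credit_risk
#     where status = 'ACTIVE'
#     group by region;
#   quit;

# PySpark equivalent of the SAS step above.
risk_summary = (
    spark.table("credit_risk")                       # assumed source table
    .filter(F.col("status") == "ACTIVE")
    .groupBy("region")
    .agg(F.sum("exposure").alias("total_exposure"))
)
risk_summary.write.mode("overwrite").saveAsTable("risk_summary_pyspark")

# Minimal reconciliation: compare row counts and an aggregate checksum
# between the legacy SAS output (assumed loaded into a table) and the
# migrated PySpark output.
legacy = spark.table("risk_summary_sas")
migrated = spark.table("risk_summary_pyspark")

counts_match = legacy.count() == migrated.count()
legacy_total = legacy.agg(F.sum("total_exposure")).first()[0]
migrated_total = migrated.agg(F.sum("total_exposure")).first()[0]
print(f"row counts match: {counts_match}, "
      f"total_exposure delta: {abs(legacy_total - migrated_total)}")
```

In practice a check like this would typically be extended to per-key comparisons and wired into the automated conversion utilities and CI/CD pipelines mentioned above; it is only a sketch of the idea.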

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 5+ years of applied experience. In addition, 2+ years of experience leading technologists to manage and solve complex technical items within your domain of expertise.
  • Hands-on experience in data/analytics engineering; 5+ years leading large-scale PySpark/Databricks programs.
  • Proven SAS-to-PySpark migration delivery into production with measurable outcomes.
  • Track record in data migration, quality assurance, and operational readiness.

Preferred qualifications, capabilities, and skills

  • Deep PySpark expertise (performance tuning, ETL functionality).
  • Databricks platform proficiency (Unity Catalog, SQL Warehouses, job/cluster policies, MLflow basics).
  • Automation and CI/CD (build/deploy pipelines, release management).
  • Cloud fundamentals (AWS S3, IAM, networking, monitoring).

Drive significant business impact and tackle a diverse array of challenges that span multiple technologies and applications.
