Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Lead Software Engineer Databricks, PySpark & AWS

at J.P. Morgan

Bulge Bracket Investment Banks

Tech Lead · No visa sponsorship · Data Engineering

Posted 21 days ago

Compensation: Not specified
Currency: Not specified
City: Hyderabad
Country: India

Senior technical role at JPMorgan Chase responsible for designing, building, and maintaining large-scale data pipelines using AWS, Databricks, Spark and PySpark. The role involves architecture and design decisions, collaborating with cross-functional teams, implementing ETL processes, and ensuring data quality, performance and governance. You will also guide engineers, resolve complex technical issues, and implement cloud-native deployment and CI/CD practices.

Location: Hyderabad, Telangana, India

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.

As a Lead Software Engineer at JPMorgan Chase within Corporate Technology, you play a vital role in an agile team dedicated to enhancing, building, and delivering reliable, market-leading technology products in a secure, stable, and scalable manner. As a key technical contributor, you are tasked with implementing essential technology solutions across diverse technical domains, supporting various business functions to achieve the firm's strategic goals.

Job responsibilities

  • Develop designs at the appropriate level of detail and build consensus with peers where necessary.
  • Collaborate with software engineers and cross-functional teams to design and implement deployment strategies using AWS Cloud and Databricks pipelines.
  • Work with software engineers and teams to design, develop, test, and implement solutions within applications.
  • Engage with technical experts, key stakeholders, and team members to resolve complex problems effectively.
  • Understand leadership objectives and proactively address issues before they impact customers.
  • Design, develop, and maintain robust data pipelines to ingest, process, and store large volumes of data from various sources.
  • Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity using tools like Apache Spark and PySpark.
  • Monitor and optimize the performance of data systems and pipelines.
  • Implement best practices for data storage, retrieval, and processing.
  • Maintain comprehensive documentation of data systems, processes, and workflows.
  • Ensure compliance with data governance and security policies.
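As a rough illustration of the ETL and data-quality responsibilities above, the sketch below shows a minimal validate-and-transform step in plain Python. A real pipeline at this level would express the same filter and transform as PySpark DataFrame operations; every name here (`clean_trades`, `REQUIRED_FIELDS`, the sample records) is hypothetical and not taken from the posting.

```python
# Illustrative only: a toy validate-and-transform step of the kind a
# PySpark ETL job would perform at scale. All names are hypothetical.
REQUIRED_FIELDS = {"trade_id", "amount", "currency"}

def clean_trades(records):
    """Drop incomplete records and normalize currency codes to upper case."""
    cleaned = []
    for rec in records:
        # Data-quality rule: reject records missing any required field.
        if not REQUIRED_FIELDS <= rec.keys():
            continue
        cleaned.append({**rec, "currency": rec["currency"].upper()})
    return cleaned

raw = [
    {"trade_id": 1, "amount": 100.0, "currency": "usd"},
    {"trade_id": 2, "amount": 50.0},  # missing currency -> dropped
]
print(clean_trades(raw))  # [{'trade_id': 1, 'amount': 100.0, 'currency': 'USD'}]
```

In PySpark the same logic would typically be a `filter` on null checks followed by a `withColumn` transform, with the result written back to a governed store such as Delta tables.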

Required qualifications, capabilities, and skills

  • Formal training or certification in software engineering concepts and 5+ years of applied experience.
  • Formal training or certification in AWS/Databricks with 10+ years of applied experience.
  • Expertise in programming languages such as Python and PySpark.
  • 10+ years of professional experience in designing and implementing data pipelines in a cloud environment.
  • Proficient in design, architecture, and development using AWS Services, Databricks, Spark, Snowflake, etc.
  • Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform.
  • Familiarity with container and container orchestration technologies such as ECS, Kubernetes, and Docker.
  • Ability to troubleshoot common Big Data and Cloud technologies and issues.
  • Practical cloud-native experience.

Preferred qualifications, capabilities, and skills

  • 5+ years of experience in leading and developing data solutions in the AWS cloud.
  • 10+ years of experience in building, implementing, and managing data pipelines using Databricks on Spark or similar cloud technologies.
