
Data Engineer II

at J.P. Morgan

Junior · No visa sponsorship · Data Engineering

Posted 18 days ago

Compensation: Not specified
Currency: Not specified

City: Bengaluru
Country: India

Join the Employee Platforms team as a Data Engineer II to design, develop, and support Databricks-based data pipelines and integrations. You will write production-quality Python code, implement CI/CD automation, and troubleshoot pipeline and integration issues. The role involves collaborating with internal teams to promote Databricks best practices, documenting technical solutions, and contributing to a secure, scalable data platform. Continuous learning and participation in agile team activities are expected.

Location: Bengaluru, Karnataka, India

You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.

As a Data Engineer II at JPMorgan Chase within the Employee Platforms team, you are part of an agile team that enhances, designs, and delivers data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of the data engineering team, you will oversee Databricks adoption and support internal teams as they integrate with Databricks, collaborating with colleagues across the organization to deliver secure, scalable, and reliable technology solutions that drive business success.

Job responsibilities

  • Design, develop, and troubleshoot software solutions with a focus on Databricks integration and adoption
  • Build and maintain data pipelines using Databricks to ensure efficient and reliable data processing for internal teams
  • Write secure, high-quality production code in Python and participate in code reviews and debugging
  • Implement and support CI/CD pipelines to automate software delivery and improve operational stability
  • Collaborate with internal teams to provide guidance and best practices for Databricks usage
  • Participate in team discussions and activities to share knowledge and drive continuous improvement
  • Contribute to a positive team culture that values diversity, inclusion, and respect
  • Support the adoption of best practices in software engineering and data management
  • Troubleshoot and resolve issues related to data pipelines and software integration
  • Document technical processes, workflows, and solutions for team reference
  • Engage in ongoing learning to stay updated with advancements in Databricks, Python, and related technologies

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 2+ years of applied experience
  • Proficiency in Python programming
  • Experience building and maintaining data pipelines using Databricks
  • Practical experience with CI/CD tools and automation methods
  • Familiarity with agile development practices, application resiliency, and security
  • Experience participating in code reviews and debugging
  • Ability to collaborate with teams to implement best practices for Databricks and data engineering
  • Ability to troubleshoot and resolve technical issues in data pipelines
  • Strong skills in documenting and communicating technical solutions
  • Commitment to continuous improvement and learning within a team environment

Preferred qualifications, capabilities, and skills

  • Experience with AWS services such as S3, EMR, Glue, ECS/EKS, and Athena
  • Certifications in AWS, Databricks, or automation tools
  • Exposure to open table formats such as Iceberg or Delta Lake, and data catalog tools such as AWS Glue Data Catalog
  • Interest in cloud computing, artificial intelligence, or mobile development

