Data Engineer III-Databricks, SQL

at J.P. Morgan

Mid Level · No visa sponsorship · Data Engineering

Posted 15 hours ago

Compensation: Not specified
Currency: Not specified
Industry: Not specified
City: Bengaluru
Country: Not specified

Join JPMorgan Chase as a Data Engineer III in the Employee Platforms team to design, build, and maintain secure, scalable data pipelines and architectures across multiple business functions. You will develop and test ELT/ETL workflows using SQL, Spark, and Databricks, and ensure data quality, lineage, and governance. The role involves implementing infrastructure as code with Terraform in AWS, optimizing pipeline performance and cost, and collaborating with stakeholders in an agile environment to deliver trusted data products.

Location: Bengaluru, Karnataka, India

Be part of a dynamic team where your distinctive skills will contribute to a winning culture.

As a Data Engineer III at JPMorgan Chase within the Employee Platforms team, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities

  • Design and deliver trusted data collection, storage, access, and analytics solutions that are secure, stable, and scalable.
  • Develop, test, and maintain critical data pipelines and data architectures across multiple technical areas and business functions.
  • Build and optimize ELT/ETL workflows using SQL, Spark, and Databricks to support analytical and operational use cases (see the sketch after this list).
  • Support review and implementation of controls to protect enterprise data and meet governance and compliance requirements.
  • Advise stakeholders and implement custom configuration changes in one to two tools to generate requested business products.
  • Update logical and physical data models based on evolving use cases and sources.
  • Use advanced SQL (joins, aggregations) and apply NoSQL where appropriate based on workload and access patterns.
  • Implement and automate CI/CD for data pipelines and infrastructure as code with Terraform in AWS environments.
  • Monitor, troubleshoot, and improve pipeline performance, reliability, and cost efficiency.
  • Implement data quality checks, lineage, and metadata management across the data lifecycle.
  • Contribute to an agile team culture of diversity, opportunity, inclusion, respect, and continuous improvement.
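To make the ELT and data-quality responsibilities above concrete, here is a minimal PySpark sketch of an extract-transform-load pass with a simple quality gate. It is illustrative only: the paths, column names, and checks are hypothetical, not taken from the posting.

```python
# Minimal ELT sketch in PySpark with a basic data-quality gate.
# All paths, table names, and columns are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("elt-sketch").getOrCreate()

# Extract: read raw JSON events from a (hypothetical) landing zone.
raw = spark.read.json("s3://example-bucket/landing/events/")

# Transform: enforce types, derive a partition column, drop duplicates.
clean = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Data-quality gate: abort the load if required keys are missing,
# so bad batches never reach downstream consumers.
null_ids = clean.filter(F.col("event_id").isNull()).count()
if null_ids > 0:
    raise ValueError(f"{null_ids} rows with null event_id; aborting load")

# Load: write partitioned Parquet for analytical and operational use.
clean.write.mode("append").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```

Failing the batch rather than silently filtering keeps lineage simple: a rejected load is visible in pipeline monitoring, which matches the monitoring and reliability responsibilities above.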

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 3+ years of applied experience.
  • Demonstrate end‑to‑end experience across the data lifecycle, including ingestion, modeling, transformation, and serving.
  • Design and implement data models for analytical and operational workloads.
  • Use Terraform and AWS services to build cloud‑native, infrastructure‑as‑code solutions.
  • Write advanced SQL for complex joins, aggregations, and performance tuning.
  • Apply working knowledge of Spark to build scalable, distributed data processing jobs.
  • Program in Python to create reliable data pipelines and automation.
  • Implement Medallion architecture patterns and build robust pipelines on Databricks (see the sketch after this list).
  • Apply CI/CD practices and tools to automate build, test, and deployment of data pipelines.
  • Perform statistical data analysis and select appropriate tools and data patterns to meet business analysis needs.
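Since the list names the Medallion pattern and advanced SQL explicitly, here is a minimal bronze/silver/gold sketch, assuming a Databricks environment with Delta Lake and an existing `spark` session. Every database, table, and column name is hypothetical; the gold step uses spark.sql to show the kind of join-and-aggregation work involved.

```python
# Minimal Medallion (bronze/silver/gold) sketch, assuming Databricks with
# Delta Lake and an existing `spark` session. All names are hypothetical.
from pyspark.sql import functions as F

for schema in ("bronze", "silver", "gold"):
    spark.sql(f"CREATE DATABASE IF NOT EXISTS {schema}")

# Bronze: land raw data as-is, tagged with ingestion metadata.
bronze = (
    spark.read.json("s3://example-bucket/landing/orders/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: validated, typed, deduplicated records.
silver = (
    spark.table("bronze.orders")
    .where(F.col("order_id").isNotNull())
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregate via a join + aggregation in Spark SQL.
# Assumes a silver.customers table (hypothetical) maintained elsewhere.
gold = spark.sql("""
    SELECT c.region,
           date_trunc('month', o.order_ts)  AS order_month,
           SUM(o.amount)                    AS revenue,
           COUNT(DISTINCT o.order_id)       AS order_count
    FROM silver.orders o
    JOIN silver.customers c ON o.customer_id = c.customer_id
    GROUP BY c.region, date_trunc('month', o.order_ts)
""")
gold.write.format("delta").mode("overwrite").saveAsTable("gold.monthly_revenue")
```

Each layer is a separate Delta table, so validation rules can tighten as data moves toward gold, which is the usual rationale for the pattern.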
     
     
Preferred qualifications, capabilities, and skills

  • Hands‑on relevant software development experience.
  • Program in multiple modern languages, with Python preferred.
  • Work with relational and NoSQL databases.
  • Use CI/CD tools such as Jules and version control systems like Bitbucket and Git.

