Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Software Engineer III - Databricks, SQL, Python

at J.P. Morgan

Bulge Bracket Investment Banks


Mid Level · No visa sponsorship · Data Engineering

Posted 13 days ago

Compensation: Not specified
Currency: Not specified
City: Bengaluru
Country: India

Software Engineer III at JPMorganChase within the Chief Technology Office, focused on designing and delivering secure, scalable software solutions. Works on data lakehouse ingestion, transformation and storage using Databricks/Spark in the public cloud to support AI and analytics. Collaborates with software and data engineers and researchers on AI product development, with responsibilities spanning design, development, deployment, and maintenance. Requires 3+ years of hands-on experience in Python/SQL and data engineering concepts.

Location: Bengaluru, Karnataka, India

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Software Engineer III at JPMorganChase within the Chief Technology Office team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Our lakehouse must ingest, transform, and store additional data sets that will be used not only to train our AI systems but also for analytical purposes across multiple use cases. The processing and persistence of data will be managed within the public cloud and will utilise Big Data frameworks such as Databricks and Spark.
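The ingest → transform → store flow described above can be sketched in plain Python. This is a simplified, stdlib-only illustration: the real pipeline would run on Databricks/Spark against cloud storage, and the sample schema, event names, and JSON-lines output used here are hypothetical.

```python
import csv
import json
from io import StringIO

# Hypothetical raw extract; the posting does not specify the actual
# source schema or the lakehouse table layout.
RAW_CSV = """id,event,amount
1,trade,100.5
2,trade,250.0
3,quote,
"""

def ingest(raw: str) -> list[dict]:
    """Ingest: parse the raw CSV extract into records."""
    return list(csv.DictReader(StringIO(raw)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: keep complete 'trade' rows and cast types."""
    return [
        {"id": int(r["id"]), "amount": float(r["amount"])}
        for r in records
        if r["event"] == "trade" and r["amount"]
    ]

def store(records: list[dict]) -> str:
    """Store: serialize to JSON lines (a stand-in for a lakehouse table write)."""
    return "\n".join(json.dumps(r) for r in records)

if __name__ == "__main__":
    print(store(transform(ingest(RAW_CSV))))
```

In a Databricks/Spark setting each stage would typically map to a DataFrame read, a set of column transformations and filters, and a managed-table write, but the three-stage shape is the same.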

We are looking for a Data Engineer to help with the design, development, deployment, delivery, and maintenance of AI products for our clients. In this role, you will work with other software engineers, data engineers, and research scientists, serving as a member of an existing agile team that delivers trusted, market-leading AI technology products in a secure, stable, and scalable way.

Job responsibilities

  • Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
  • Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
  • Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
  • Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies

 

Required qualifications, capabilities, and skills

  • Formal training or certification in software engineering concepts and 3+ years of applied experience
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Extensive development experience using Python/SQL
  • Solid understanding of software applications and technical processes within a related technical discipline (e.g. data ingestion, data storage, data serving, APIs, etc.).
  • Hands-on experience in data lake or data warehouse and related technologies (e.g. Spark, ETL, Databricks).

 

Preferred qualifications, capabilities, and skills

  • Familiarity with modern front-end technologies
  • Exposure to cloud technologies
