Software Engineer II

at J.P. Morgan

Bulge Bracket Investment Banks

Mid Level · No visa sponsorship · Data Engineering

Posted 19 hours ago

Compensation: Not specified
Currency: Not specified
City: Mumbai
Country: India

As a Software Engineer II, you'll be a seasoned member of an agile team within Asset and Wealth Management, responsible for designing and delivering secure, scalable data solutions. You will lead the design, development, and implementation of scalable data pipelines and ETL batches using Python/PySpark on cloud platforms (primarily AWS), employing infrastructure-as-code and CI/CD practices. The role includes mentoring and managing data engineers, collaborating with stakeholders to translate requirements into technical solutions, and optimizing cloud data infrastructure for performance and reliability. You will also implement data governance and troubleshoot production pipelines while staying current with emerging technologies.

Location: Mumbai, Maharashtra, India

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Software Engineer II at JPMorgan Chase within Asset and Wealth Management, you serve as a seasoned member of an agile team that designs and delivers trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

  • Lead the design, development, and implementation of scalable data pipelines and ETL batches using Python/PySpark on AWS.
  • Execute standard software solutions, design, development, and technical troubleshooting.
  • Use infrastructure as code to build applications that orchestrate and monitor data pipelines, programmatically create and manage on-demand cloud compute resources, and build frameworks to ingest and distribute data at scale.
  • Manage and mentor a team of data engineers, providing guidance to ensure successful product delivery and support.
  • Collaborate proactively with stakeholders, users, and technology teams to understand business and technical requirements and translate them into technical solutions.
  • Optimize and maintain data infrastructure on cloud platforms, ensuring scalability, reliability, and performance.
  • Implement data governance and best practices to ensure data quality and compliance with organizational standards.
  • Monitor and troubleshoot application and data pipelines, identifying and resolving issues in a timely manner.
  • Stay up-to-date with emerging technologies and industry trends to drive innovation and continuous improvement.
  • Add to team culture of diversity, opportunity, inclusion, and respect.

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 5+ years of applied experience.
  • Experience in software development and data engineering, with demonstrable hands-on experience in Python and PySpark.
  • Proven experience with cloud platforms such as AWS, Azure, or Google Cloud.
  • Good understanding of data modeling, data architecture, ETL processes, and data warehousing concepts.
  • Experience with, or good knowledge of, cloud-native ETL platforms such as Snowflake and/or Databricks.
  • Experience with big data technologies and services such as AWS EMR, Redshift, Lambda, and S3.
  • Proven experience with efficient cloud DevOps practices and CI/CD tools such as Jenkins or GitLab for data engineering platforms.
  • Good knowledge of SQL and NoSQL databases, including performance tuning and optimization.
  • Experience with declarative infrastructure provisioning tools such as Terraform, Ansible, or CloudFormation.
  • Strong analytical skills to troubleshoot issues and optimize data processes, working both independently and collaboratively.

Preferred qualifications, capabilities, and skills

  • Knowledge of the machine learning model lifecycle, language models, and cloud-native MLOps pipelines and frameworks.
  • Familiarity with data visualization tools and data integration patterns.