
Data Engineer II, Glass Liquidity and Cash Management

at J.P. Morgan

Bulge Bracket Investment Banks

Junior · No visa sponsorship · Data Engineering

Posted 11 days ago


Compensation: Not specified (SGD)
City: Singapore
Country: Singapore

Join JPMorgan Chase's Corporate and Investment Banking Technology team as a Data Engineer II in Glass Liquidity and Cash Management in Singapore. You will design, build, and maintain robust data pipelines for extraction, transformation, and ingestion of large-scale datasets into a lakehouse, and develop data workflows using Java, Spring Boot, Apache Flink, Python, PySpark, and Databricks. You will integrate streaming data with Kafka and Flink, build Databricks pipelines for complex BI data, ensure data quality, security, and governance, monitor performance, and document data flows and architectures.

Location: Singapore

You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.

 
As a Data Engineer II at JPMorgan Chase, on the Corporate and Investment Banking (CIB) Technology Glass Liquidity and Cash Management team, you are part of an agile team that enhances, designs, and delivers data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you deliver data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow in your role.
 

Job responsibilities

 

  • Design, develop, and maintain robust data pipelines for the extraction, transformation, and ingestion of large-scale datasets into a lakehouse.
  • Build and optimize data workflows using Java, Spring Boot, Apache Flink, Python, PySpark, and Databricks.
  • Integrate and process streaming data using Kafka and Apache Flink.
  • Implement Databricks pipelines that build complex business data aggregates for BI reporting and analytics.
  • Ensure data quality, integrity, and security across all data platforms.
  • Monitor, troubleshoot, and optimize data pipeline performance.
  • Document data flows, architecture, and processes.
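The extract-transform-ingest cycle described in the first responsibility can be sketched in plain Python; every record field and the net-position aggregate below are hypothetical illustrations, not details from the role itself:

```python
from collections import defaultdict

# Hypothetical raw cash-management records, as an upstream extract might deliver them.
RAW_RECORDS = [
    {"account": "SG-001", "currency": "SGD", "amount": "1500.00"},
    {"account": "SG-001", "currency": "SGD", "amount": "-200.00"},
    {"account": "SG-002", "currency": "USD", "amount": "300.50"},
]

def transform(record):
    """Transform step: normalize string amounts to floats for aggregation."""
    return {
        "account": record["account"],
        "currency": record["currency"],
        "amount": float(record["amount"]),
    }

def ingest(records):
    """Ingest step: build a per-currency net-position aggregate, the kind of
    business aggregate a pipeline might materialize into a lakehouse table."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["currency"]] += rec["amount"]
    return dict(totals)

positions = ingest(transform(r) for r in RAW_RECORDS)
print(positions)  # {'SGD': 1300.0, 'USD': 300.5}
```

In a production pipeline the same three stages would typically run on PySpark or Flink rather than in-process Python, with the result written to Delta tables instead of a dict.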

 

Required qualifications, capabilities, and skills

 

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Formal training or certification in software engineering concepts with 2+ years of experience.
  • Strong hands-on experience in Java, Apache Flink, Python, or PySpark.
  • Strong hands-on experience writing complex SQL/PL/SQL.
  • Proficiency in Kafka for real-time data streaming.
  • Good understanding of data extraction, ingestion, and ETL processes.
  • Understanding of lakehouse architecture and working experience on the Databricks platform would be an advantage.
  • Experience building dashboards using Tableau would be an advantage.
  • Familiarity with cloud-based storage technologies such as AWS S3.
  • Excellent problem-solving and communication skills.
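"Complex SQL" in roles like this usually means multi-level aggregates and window functions. A minimal, self-contained sketch using Python's built-in sqlite3 module (table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cash_flows (account TEXT, ccy TEXT, amount REAL);
    INSERT INTO cash_flows VALUES
        ('SG-001', 'SGD', 1500.0),
        ('SG-001', 'SGD', -200.0),
        ('SG-002', 'USD', 300.5);
""")

# Per-account net positions, plus a window function that adds each
# currency's total alongside every row -- a typical BI-style aggregate.
rows = conn.execute("""
    SELECT ccy, account, net_position,
           SUM(net_position) OVER (PARTITION BY ccy) AS ccy_total
    FROM (SELECT ccy, account, SUM(amount) AS net_position
          FROM cash_flows
          GROUP BY ccy, account)
    ORDER BY ccy, account
""").fetchall()
print(rows)
```

Window functions require SQLite 3.25+; the same query shape carries over to Spark SQL or PL/SQL with minor dialect changes.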
Be part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions
