Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Data Engineer - Spark, AWS

at J.P. Morgan

Bulge Bracket Investment Banks

Mid Level · No visa sponsorship · Data Engineering

Posted 14 days ago

Compensation: Not specified
Currency: Not specified
City: Hyderabad
Country: India

JPMorgan Chase is hiring a Data Engineer II for its CCB-Connected Commerce team to design, develop, and maintain data collection, storage, access, and analytics solutions. The role involves building scalable, secure data pipelines and contributing to data modeling, migration, and system configuration. Candidates should have hands-on experience with JavaSpark/PySpark, AWS, strong SQL skills, and familiarity with NoSQL and domain-driven/microservices patterns.

Location: Hyderabad, Telangana, India

You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.

 
As a Data Engineer II at JPMorganChase within the CCB-Connected Commerce space, you are part of an agile team that works to enhance, design, and deliver the data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you execute data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
 

Job responsibilities

  • Organizes, updates, and maintains gathered data that will aid in making the data actionable
  • Demonstrates basic knowledge of the data system components to determine controls needed to ensure secure data access
  • Makes custom configuration changes in one or two tools to generate a product at a business or customer request
  • Updates logical or physical data models based on new use cases with minimal supervision
  • Adds to team culture of diversity, opportunity, inclusion, and respect

 

Required qualifications, capabilities, and skills

  • Formal training or certification in software development concepts and 3+ years of applied experience with JavaSpark/PySpark and AWS
  • Proficiency in one or more large-scale data processing frameworks such as JavaSpark, along with knowledge of Data Pipeline (DPL), data modeling, data warehousing, and data migration
  • Advanced SQL skills (e.g., joins and aggregations)
  • Working understanding of NoSQL databases
  • Significant experience with statistical data analysis and ability to determine appropriate tools to perform analysis
  • Basic knowledge of data system components to determine controls needed
  • Good understanding of domain-driven design, microservices patterns, and architecture
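
The SQL requirement above (joins and aggregations) can be illustrated with a small, self-contained sketch. The tables, names, and figures below are invented purely for illustration, using Python's built-in sqlite3:

```python
# Hypothetical tables illustrating the join + aggregation work the
# posting describes; schema and data are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# Join orders to customers, then aggregate total spend per customer.
rows = conn.execute("""
SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name
ORDER BY total DESC
""").fetchall()

for name, n, total in rows:
    print(name, n, total)
# Asha 2 200.0
# Ravi 1 50.0
```

The same join-then-group shape carries over directly to Spark SQL or a DataFrame `groupBy` in a pipeline.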
Preferred qualifications, capabilities, and skills
 
  • Familiarity with modern front-end technologies
  • Experience designing and building REST API services using Java
  • Exposure to cloud technologies; knowledge of hybrid cloud architectures is highly desirable
  • AWS Certification is a plus
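
The REST API point above refers to Java services; purely as an illustration of the request/response shape such a service exposes, here is a minimal sketch using Python's standard library. The `/health` route and its payload are invented for this example:

```python
# Minimal REST-style endpoint sketch (illustrative only; the posting
# asks for Java). Serves a JSON body on GET /health.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: OS picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

A Java equivalent (e.g., a Spring Boot `@GetMapping`) would expose the same contract: a route, a status code, and a JSON body.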
     
Be part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions
