
Lead Software Engineer – Data Engineer with PySpark, AWS, and SQL
at J.P. Morgan
Posted 6 days ago
- Compensation: Not specified
- City: Not specified
- Country: India
- Currency: Not specified
As Lead Software Engineer – Data Engineer, you will drive data integration and analysis across disparate systems, building extensible data acquisition and integration solutions that meet functional and non-functional requirements. You will implement ETL processes to extract, transform, and distribute data across data stores, and develop business intelligence integration designs, collaborating with product, integration engineering, quality engineering, and system admin teams, including geographically distributed teams, to deliver scalable data-centric software. The role sits within JPMorgan Chase's Technology Division.
Location: Hyderabad, Telangana, India
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
Job responsibilities:
- Manage data integration and data analysis of disparate systems
- Build extensible data acquisition and integration solutions to meet the functional and non-functional requirements of the client
- Implement processes and logic to extract, transform, and distribute data across one or more data stores from a wide variety of sources
- Provide problem-solving expertise and complex analysis of data to develop business intelligence integration designs
- Interface with other internal product development teams as well as cross-functional teams (Product Management, Integration Engineering, Quality Engineering, System Admin)
- Work with remote and geographically distributed teams to build the right products from the right building blocks and make them easily consumable by other products
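The extract, transform, and distribute responsibilities above follow a standard ETL shape. A minimal plain-Python sketch (the role itself uses PySpark; the record fields, store names, and validation rules here are illustrative assumptions, not part of the posting):

```python
# Minimal ETL sketch: extract records from a source, transform them,
# and distribute the result to multiple target stores.
# Plain-Python stand-in for a PySpark pipeline; all names are illustrative.

def extract(source):
    """Pull raw records from a source (here, an in-memory list)."""
    return list(source)

def transform(records):
    """Normalize fields and drop records that fail validation."""
    out = []
    for r in records:
        if "id" not in r or r.get("amount") is None:
            continue  # skip malformed rows
        out.append({
            "id": r["id"],
            "amount": round(float(r["amount"]), 2),
            "region": r.get("region", "UNKNOWN").upper(),
        })
    return out

def load(records, targets):
    """Distribute the same transformed records to each target store."""
    for t in targets:
        t.extend(records)

# Usage: one feed fanned out to two hypothetical target stores.
raw = [
    {"id": 1, "amount": "10.5", "region": "emea"},
    {"id": 2, "amount": None},   # dropped: missing amount
    {"amount": "3.0"},           # dropped: missing id
]
warehouse, lake = [], []
load(transform(extract(raw)), [warehouse, lake])
```

The fan-out in `load` mirrors the "distribute data across one or more data stores" responsibility: every target receives the same validated, normalized records.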
Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 5+ years applied experience
- Hands-on experience in data integration projects using Big Data technologies
- 5+ years of hands-on experience with Spark and Scala, along with experience building streaming applications using Kafka
- Strong experience in CI/CD using Jenkins, Git, Artifactory, YAML, and Maven for cloud deployments
- Experience with open-source Java, APIs, JUnit, Spring Boot applications, and Swagger setup
- Good knowledge of Big Data querying tools such as Pig, Hive, and Impala
- Strong experience integrating data from different file-storage formats such as Parquet, ORC, Avro, and SequenceFile
- Strong technical understanding of building scalable, high-performance distributed services and systems
- Strong knowledge of Data Warehousing and Data Lake concepts
- Strong problem-solving, troubleshooting, and analytical skills
- Excellent communication, presentation, interpersonal, and analytical skills, including the ability to communicate complex concepts clearly to different audiences
- Experience with Oracle/SQL and NoSQL data stores such as DynamoDB
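The Kafka and Spark streaming experience listed above typically centers on windowed aggregation over an event stream. A minimal plain-Python sketch of a tumbling-window count (the window size, event shape, and function names are assumptions for illustration):

```python
from collections import defaultdict

# Tumbling-window event count: the core pattern behind Kafka/Spark
# streaming aggregations. Plain-Python sketch; a real pipeline would
# read from Kafka and aggregate with Spark Structured Streaming.

WINDOW_SECONDS = 60

def window_start(ts, width=WINDOW_SECONDS):
    """Align a timestamp (in seconds) to the start of its tumbling window."""
    return ts - (ts % width)

def count_by_window(events):
    """events: iterable of (timestamp_seconds, key) pairs.
    Returns {(window_start, key): count}."""
    counts = defaultdict(int)
    for ts, key in events:
        counts[(window_start(ts), key)] += 1
    return dict(counts)

# Usage: three events falling into two 60-second windows.
events = [(5, "login"), (30, "login"), (65, "login")]
result = count_by_window(events)
```

Tumbling windows partition the timeline into fixed, non-overlapping buckets, so each event contributes to exactly one window; sliding windows would instead let events count toward several overlapping buckets.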
Preferred qualifications, capabilities, and skills:
- Experience with AWS data warehousing and database platforms
- Ability to quickly learn new technologies in a dynamic environment
- Experience managing a team of 5-10 engineers
- Big Data experience, preferably related to human resources analytics

