
Data Software Engineer II
at J.P. Morgan
- Compensation: Not specified
- City: New York City
- Country: United States
- Currency: Not specified
As a Data Software Engineer II on the Consumer & Community Banking Open Banking team, you will design and deliver secure, scalable data collection, storage, access, and analytics solutions. You will develop, test, and maintain critical data pipelines and architectures using SQL/NoSQL, Spark, Python or Java, and AWS data services (Lake Formation, Glue/EMR, S3, Athena, Kinesis/MSK, etc.). Responsibilities include ETL, data modeling, control reviews, custom tool configuration, and contributing to an agile, diverse team.
Location: New York, NY, United States
Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team.
Job responsibilities
- Supports review of controls to ensure sufficient protection of enterprise data
- Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request
- Updates logical or physical data models based on new use cases
- Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
- Performs ETL and data analysis
- Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 2+ years applied experience
- Advanced SQL skills (e.g., joins and aggregations)
- Working understanding of NoSQL databases
- Experience making custom configuration changes in a tool to generate a product
- Strong organizational, problem-solving, and critical thinking skills; strong documentation skills
- Proficiency in Python or Java
- Cluster Computing frameworks: Proficiency in Spark
- AWS Data Services: Proficiency in Lake Formation, Glue ETL or EMR, S3, Glue Catalog, Athena, Kinesis or MSK, Airflow or Lambda, Step Functions, EventBridge
- Data De/Serialization: Expertise in at least two of the following formats: Parquet, Iceberg, Avro, JSON-LD
- DevOps: Linux scripting, Jenkins, Git, CI/CD, JIRA, TDD
- AWS Data Security: Good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager
Preferred qualifications, capabilities, and skills
- Experience across the data lifecycle
- Significant experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis