
Software Engineer II - Python, PySpark, AWS
at J.P. Morgan
Posted 17 hours ago
- Compensation: Not specified
- City: Not specified
- Country: Not specified
- Currency: Not specified
A Software Engineer II in Corporate Technology designs and delivers secure, scalable data solutions that meet the organization's data quality, security, and regulatory requirements. The role involves building and maintaining data pipelines with AWS, Python, PySpark, and Big Data frameworks, and architecting cloud-based storage and analytics solutions. It also covers automating infrastructure with Terraform, developing CI/CD pipelines with Jenkins, and upholding standards for code quality, data security, and regulatory compliance. Collaboration with data stewards and compliance officers is expected, both to deliver high-performance, compliant data solutions and to optimize cloud workflows.
Location: Hyderabad, Telangana, India
Join us for an exciting opportunity to advance your data engineering career and drive innovation in cloud automation.
Job summary
As a Software Engineer II at JPMorgan Chase within the Corporate Technology team, you design and deliver secure, scalable data solutions that support our organization’s data quality, security, and regulatory requirements.
Job responsibilities
- Designs, develops, and maintains scalable data pipelines using AWS, Python, PySpark, and Big Data frameworks
- Architects and implements AWS cloud-based solutions for secure data storage, processing, and analytics
- Automates infrastructure deployment and management using Terraform
- Develops and maintains CI/CD pipelines using Jenkins for automated testing and deployment
- Collaborates with data stewards and compliance officers to deliver high-performance, compliant data solutions
- Ensures best practices in code quality, data security, privacy, and regulatory compliance
- Troubleshoots and optimizes data workflows and cloud infrastructure
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and three years of applied experience
- Experience in data engineering, cloud automation, or DevOps
- Advanced programming skills in Python for data engineering and automation
- Hands-on experience with Hadoop, Spark, or similar Big Data technologies
- Proficiency in AWS Cloud platforms
- Strong knowledge of Infrastructure as Code using Terraform
- Experience in building and maintaining CI/CD pipelines with Jenkins
Preferred qualifications, capabilities, and skills
- Experience or interest in agent-based AI systems or autonomous agents
- Strong problem-solving and communication skills
- Ability to work independently and as part of a team

