
AWS Lead Software Engineer-Full Stack/Java/Spark
at J.P. Morgan
Posted 18 days ago
- Compensation: Not specified
- City: Wilmington
- Country: United States
- Currency: Not specified
Lead a multi-disciplinary engineering team to design, build, and operate cloud-native applications on AWS using Java and Apache Spark. Partner with product, design, and business stakeholders to translate requirements into scalable, secure solutions while driving architectural decisions and delivery. Mentor engineers, enforce quality gates (CI/CD, automated testing, observability), and own production health and incident response. Modernize legacy systems, implement batch and real-time data processing, and promote engineering best practices.
Location: Wilmington, DE, United States
As an AWS Lead Software Engineer-Full Stack/Java/Spark at JPMorgan Chase within the Corporate Sector's Risk Technology team, you will set the technical direction and orchestrate delivery for a dynamic engineering team. You will partner closely with product, design, and business stakeholders to gather requirements, clarify scope, and translate objectives into high-quality, scalable solutions. You will lead by example, innovating through modern engineering practices, elevating team standards, and ensuring predictable delivery from discovery through production support.
Job responsibilities
• Lead a multi-disciplinary agile team, setting clear goals, engineering standards, and delivery plans. Mentor and coach engineers, grow talent, and foster a culture of craftsmanship, ownership, and psychological safety. Drive architectural decisions, design reviews, and technical governance; ensure alignment with long-term platform strategy.
• Facilitate requirement gathering, story mapping, and scope definition with stakeholders; translate business needs into technical roadmaps. Own estimation, dependency management, and risk mitigation; maintain a healthy backlog and release plan. Communicate status and trade-offs clearly to stakeholders; ensure transparency on delivery and quality.
• Design and build new applications with modern, cloud-native architectures; modernize legacy systems for resilience and performance. Implement batch and real-time components following best practices for reliability, security, operational efficiency, cost-effectiveness, and performance. Establish and enforce quality gates: code reviews, automated unit/integration/acceptance tests, and secure-by-default patterns.
• Ensure strong CI/CD discipline, infrastructure-as-code, and automated deployments. Own production health with robust observability (metrics, logs, traces), incident response, and postmortems. Provide and coordinate Level 2 support; reduce toil and mean time to recovery through engineering improvements.
• Champion experimentation and continuous learning; introduce pragmatic innovations that improve developer experience and customer outcomes. Participate in and lead agile ceremonies: standups, sprint planning, backlog refinement, demos, and retrospectives.
• Contribute to a team culture of diversity, opportunity, inclusion, and respect.
Required qualifications, capabilities, and skills
• Formal training or certification in software engineering concepts and 5+ years of applied experience
• Hands-on experience with Apache Spark or similar large-scale data processing engines
• Experience designing, developing, and deploying software on AWS using services such as EC2, EKS, Lambda, S3, RDS, and Aurora
• Strong analytical, troubleshooting, and performance tuning skills across distributed systems
• Proven experience driving end-to-end delivery: requirements, architecture, implementation, testing, deployment, and operations
Preferred qualifications, capabilities, and skills
• AWS certifications (Cloud Practitioner, Developer, or Solutions Architect)
• Experience using Terraform to deliver infrastructure-as-code on public cloud
• Experience in coding Java applications with Spring Boot; experience designing resilient APIs and data models
• Experience in Linux scripting (Bash, KSH) or Python for automation and tooling
• Practical experience with event streaming (e.g., Kafka), relational/NoSQL databases, caching, and messaging
• Experience with CI/CD pipelines and tooling, test automation frameworks, and security best practices
• Familiarity with observability stacks (e.g., Prometheus, Grafana, OpenTelemetry) and SRE principles
