
Lead Software Engineer - Python / PySpark / AWS
at J.P. Morgan
Posted 13 days ago
- Compensation: Not specified (USD)
- City: Not specified
- Country: United States
Lead Software Engineer position at JPMorgan Chase based in Jersey City, NJ. You will be part of the Commercial & Investment Bank Markets Technology team, delivering secure, scalable software products and leading architectural and data engineering efforts. The role emphasizes Python, PySpark, and AWS, with production ETL/ELT pipelines handling large data sets, performance tuning, and data lake design. It also involves mentoring engineers and contributing to engineering communities of practice while helping advance the firm's technology strategy.
Location: Jersey City, NJ, United States
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Lead Software Engineer at JPMorgan Chase within the Commercial & Investment Bank - Markets Technology and Spread Pricing Direct team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience
- Expert-level Python programming with strong software engineering fundamentals (OOP, design patterns, testing)
- Advanced PySpark development including performance tuning, memory management, and optimization techniques
- Deep hands-on experience building production ETL/ELT pipelines processing terabytes to petabytes of data
- Proven expertise with AWS Glue (Spark and Python Shell jobs), Glue Data Catalog, crawlers, and workflows
- Strong experience architecting and implementing data lakes on AWS S3 with multi-zone design (raw/curated/analytics)
- Production experience with Apache Iceberg including schema evolution, partitioning strategies, compaction, and maintenance
- Solid understanding of open table formats (Iceberg) and their trade-offs
- Proficiency with AWS services: S3, IAM, Lake Formation, Athena, EMR, Kinesis, Lambda, Step Functions, CloudWatch
- Experience with SQL optimization, query engines (Trino, Presto, Athena, Spark SQL), and data modeling
- Demonstrated ability to lead and mentor engineering teams, conducting effective code reviews and technical guidance
- Knowledge of Perl
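To illustrate the multi-zone data lake design (raw/curated/analytics) and Hive-style partitioning named above, here is a minimal sketch in plain Python. The `zone_key` helper, the dataset names, and the key scheme are illustrative assumptions for this posting's context, not the team's actual conventions:

```python
# Hypothetical sketch of a multi-zone S3 key layout (raw/curated/analytics).
# Zone names, datasets, and the Hive-style partition scheme are assumptions.
from datetime import date

ZONES = ("raw", "curated", "analytics")

def zone_key(zone: str, dataset: str, run_date: date) -> str:
    """Build a Hive-style partitioned S3 key prefix for one lake zone."""
    if zone not in ZONES:
        raise ValueError(f"unknown zone: {zone}")
    return (f"{zone}/{dataset}/"
            f"year={run_date.year}/month={run_date.month:02d}/day={run_date.day:02d}/")

print(zone_key("curated", "trades", date(2024, 5, 1)))
# -> curated/trades/year=2024/month=05/day=01/
```

Keys of this shape are what Glue crawlers and engines such as Athena or Spark SQL read as partition columns, enabling partition pruning at query time.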
Preferred qualifications, capabilities, and skills
- Knowledge of financial products (e.g., fixed income securities, structured products, derivatives)
- Understanding of NoSQL databases





