Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Lead Software Engineer - Market Risk

at J.P. Morgan

Bulge Bracket Investment Banks

Tech Lead · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified (currency not specified)
City: Jersey City
Country: United States

Lead Software Engineer on the Market Risk MXL DataLake team, responsible for designing and implementing large-scale historical data stores and high-volume data pipelines. The role focuses on building production-grade PySpark/Spark pipelines, applying analytical data modelling (e.g., Data Vault), and optimizing distributed workloads for performance and cost. You'll collaborate with architects, risk technologists, and product owners to evolve platform standards, ensure regulatory-quality historical analysis, and improve engineering practices.

Location: Jersey City, NJ, United States

This is an opportunity to shape your career and push the limits of what's possible.

As a Lead Software Engineer at JPMorgan Chase within the Market Risk MXL DataLake Team, you will join a strategic initiative building cutting-edge data platforms for market risk and analytics. In this role, you'll design and implement high-volume data pipelines and historical data stores, collaborating closely with architects, risk technologists, and product owners.

Job Responsibilities

  • Design, build, and maintain large-scale historical data stores on modern big-data platforms
  • Develop robust, scalable data pipelines using PySpark / Spark for batch and incremental processing
  • Apply strong data-modelling principles (e.g., dimensional, Data Vault–style, or similar approaches) to support long-term historical analysis and regulatory requirements
  • Engineer high-quality, production-grade code with a focus on correctness, performance, testability, and maintainability
  • Optimize Spark workloads for performance and cost efficiency (partitioning, clustering, file layout, etc.)
  • Collaborate with architects and senior engineers to evolve platform standards, patterns, and best practices
  • Contribute to code reviews, technical design discussions, and continuous improvement of engineering practices
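As a purely illustrative sketch (none of this code comes from the posting), the Data Vault and incremental-loading themes above can be shown in plain Python: a deterministic hash key for a hub record, and an append-only load that is safe to replay. The `ticker` business key and the MD5 key convention are assumptions chosen for illustration.

```python
import hashlib

def hub_key(business_key: str) -> str:
    """Deterministic surrogate key: MD5 of the normalized business key,
    a convention commonly used for Data Vault 2.0 hub hash keys."""
    return hashlib.md5(business_key.strip().upper().encode("utf-8")).hexdigest()

def apply_batch(hub: dict, records: list) -> dict:
    """Idempotent, append-only load: a record is inserted only if its
    hash key is not already present, so replaying a batch is a no-op."""
    for rec in records:
        key = hub_key(rec["ticker"])
        hub.setdefault(key, {"ticker": rec["ticker"], "load_date": rec["load_date"]})
    return hub

hub = {}
batch = [{"ticker": "AAPL", "load_date": "2024-01-02"},
         {"ticker": "MSFT", "load_date": "2024-01-02"}]
apply_batch(hub, batch)
apply_batch(hub, batch)  # replaying the same batch adds no duplicates
print(len(hub))  # 2
```

Because the key is a pure function of the business key, re-running a failed or duplicated batch cannot corrupt the store, which is the idempotency property the responsibilities above call for.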

Required Qualifications, Capabilities and Skills

  • Degree-level education in Computer Science, Software Engineering, or a related discipline (or equivalent practical experience)
  • Strong software engineering fundamentals, including data structures, algorithms, and system design
  • Proven experience building large-scale data engineering solutions on big-data platforms
  • Hands-on experience developing PySpark / Spark pipelines in production environments
  • Solid understanding of data modelling for analytical and historical data use cases
  • Experience working with large volumes of structured data over long time horizons
  • Familiarity with distributed systems concepts such as fault tolerance, parallelism, and idempotent processing

Preferred Qualifications

  • Experience with Databricks, Delta Lake, or similar cloud-based big-data platforms
  • Hands-on experience designing and implementing Data Vault 2.0 models
  • Exposure to historical / regulatory data platforms, risk data, or financial services
  • Knowledge of append-only data patterns, slowly changing dimensions, or event-driven data models
  • Experience with CI/CD, automated testing, and production monitoring for data pipelines
  • Experience building highly reliable, production-grade risk systems with robust controls and integration with modern SRE tooling
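The append-only and slowly-changing-dimension patterns mentioned above can likewise be sketched in plain Python. This Type-2 SCD upsert is an illustrative assumption, not code from the role: history rows are never mutated except to close their validity interval, and reapplying an unchanged value is a no-op.

```python
from datetime import date

def scd2_upsert(history, key, value, as_of):
    """Type-2 slowly changing dimension: close the current row and
    append a new version only when the attribute actually changed."""
    current = next((r for r in history if r["key"] == key and r["end"] is None), None)
    if current and current["value"] == value:
        return history  # no change: replay-safe
    if current:
        current["end"] = as_of  # close the old version's validity interval
    history.append({"key": key, "value": value, "start": as_of, "end": None})
    return history

h = []
scd2_upsert(h, "AAPL", "Technology", date(2024, 1, 1))
scd2_upsert(h, "AAPL", "Technology", date(2024, 2, 1))    # no-op
scd2_upsert(h, "AAPL", "Tech Hardware", date(2024, 3, 1))  # new version
print(len(h))  # 2
```

Keeping every closed version is what enables the long-horizon historical and regulatory analysis the posting emphasizes: any past state can be reconstructed from the start/end intervals.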