Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Software Engineer [Multiple Positions Available]

at J.P. Morgan

Bulge Bracket Investment Banks
Mid Level · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
Currency: Not specified
City: Plano
Country: United States

Join a team building scalable data platforms and GenAI solutions for large-scale analytics and AI workloads. You will design, develop, and troubleshoot data pipelines, integrate ML models, and produce architecture and design artifacts while ensuring security and performance. The role focuses on AWS-based, distributed data architectures (real-time and batch), embedding LLM/GenAI services, and automating data quality, cataloging, and monitoring. Strong emphasis on building production-grade, fault-tolerant systems and driving improvements through data-driven insights.

Location: Plano, TX, United States

DESCRIPTION:

Duties: Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Execute software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Identify hidden problems and patterns in data and use these insights to drive improvements to coding hygiene and system architecture. Generate data models using firmwide tooling, linear algebra, statistics, and geometrical algorithms. Evaluate and report on access control processes to determine the effectiveness of data asset security. Develop innovative AI/ML solutions and agentic systems for the LLM Suite platform utilizing public cloud architecture, modern standards, and AI agentic frameworks. Develop and implement state-of-the-art GenAI services. Produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.

QUALIFICATIONS:

Minimum education and experience required: Master's degree in Computer Science or related field of study plus 3 years of experience in the job offered or as Software Engineer, Data Engineer, or related occupation. The employer will alternatively accept a Bachelor's degree in Computer Science or related field of study plus 5 years of experience in the job offered or as Software Engineer, Data Engineer, or related occupation.

Skills Required: This position requires three (3) years of experience with the following: building scalable data pipelines; integrating machine learning models using Scikit-Learn and PySpark; optimizing query performance; and leveraging cloud platforms, including AWS, for data-driven solutions.

This position requires two (2) years of experience with the following: designing and developing highly scalable, fault-tolerant data architectures on AWS by integrating Amazon S3 with Lake Formation; leveraging AWS Glue; orchestrating workflows with Amazon MWAA and AWS Step Functions; deploying using EMR or ECS; and optimizing performance using Apache Iceberg in Athena and Redshift Spectrum, ensuring real-time and batch processing across petabyte-scale datasets.

This position requires any amount of experience with the following: provisioning infrastructure, managing ingress/egress traffic, configuring AWS Route 53 to manage domain names, and setting up VPCs using Terraform; implementing a hub-and-spoke data modeling approach with satellite and hub tables in Amazon Redshift or Snowflake to ensure efficient historical tracking and auditability; building pipelines in Databricks by leveraging Data Lake, Photon Engine, Unity Catalog, MLflow, Auto Loader, and Workflows, and integrating the pipelines with AWS to enable real-time, secure, and cost-efficient big data and AI workloads; executing large-scale, real-time search and analytics solutions using Amazon OpenSearch Service, integrating SQS and SNS to enhance Kafka ingestion and ensure efficient data processing; using FastAPI, Spring Boot, Swagger, and Akamai for content delivery and security, and Grafana for monitoring streaming services; engineering an automated, AI-powered data catalog and knowledge graph that enables seamless discovery and self-serve analytics by embedding LLM-generated metadata, embeddings-based indexing, and vector search; and designing and implementing automated data-quality and anomaly detection using statistical methods to maintain data integrity and detect anomalies in high-velocity data streams.

Job Location: 8181 Communications Pkwy, Plano, TX 75024

