Software Engineer [Multiple Positions Available]

at J.P. Morgan

Bulge Bracket Investment Banks

Mid Level | No visa sponsorship | Data Engineering

Posted 16 hours ago

Compensation: Not specified
Currency: Not specified
City: Dallas
Country: United States

Seeking a Software Engineer to design, implement, and automate scalable data transformation and ETL pipelines for production use. The role involves building and optimizing distributed data processing workflows (Spark/Scala/Python/Java), managing large-scale data lake tables, and supporting real-time and batch ingestion. You'll collaborate with internal clients, contribute to platform SDKs/infrastructure, monitor pipeline performance (Grafana/Prometheus/CloudWatch), and mentor junior engineers. Experience with cloud data tooling (Azure Data Factory, Databricks, or AWS services), HBase/Cassandra, Delta/Parquet formats, and REST APIs is required.
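For orientation only, the sketch below shows a minimal PySpark batch ETL step of the kind this summary describes: read raw records, cleanse and normalize them, and write partitioned Parquet. It is not taken from the posting; the paths, column names, and filter condition are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read semi-structured JSON records from a raw zone (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/transactions/")

# Transform: deduplicate, normalize types, and derive a partition column.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("trade_date", F.to_date("timestamp"))
       .filter(F.col("amount").isNotNull())
)

# Load: write columnar Parquet, partitioned for downstream analytics (hypothetical path).
cleaned.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/transactions/"
)

spark.stop()
```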

Location: Plano, TX, United States

DESCRIPTION:

Duties:
- Review, understand, code, optimize, and automate existing one-off data transformation pipelines into discrete, scalable tasks.
- Plan, design, and implement data transformation pipelines and monitor operations of the data platform in a production environment.
- Collaborate with internal clients and service delivery engineers to identify data needs and intended workflows, and troubleshoot to find workable solutions.
- Gather, analyze, and document detailed technical requirements to design and implement solutions, and disseminate information to guide other engineers.
- Contribute code to the underlying infrastructure, software development kits, and platforms being built to support bespoke data transformation pipelines and enable predictive models to be produced and run at scale.
- Identify engineering opportunities to optimize operational effort and running costs of the data platform.
- Mentor junior engineering staff and provide guidance on day-to-day code development work.

QUALIFICATIONS:

Minimum education and experience required: Bachelor's degree in Computer Science, Information Technology, Software Engineering, Mathematics, or related field of study plus 5 years of experience in the job offered or as Software Engineer, Data Engineer/Developer, or related occupation.

Skills Required:

This position requires 5 years of experience with the following:
- Designing and implementing scalable ETL pipelines to process structured and semi-structured data.

This position requires 3 years of experience with the following:
- Processing data across distributed environments using Apache Spark on Big Data ecosystems such as Cloudera or Hortonworks;
- Building distributed data processing workflows using Scala, Python, and Java on Spark;
- Supporting real-time and batch data ingestion, data cleansing and transformation, and feature extraction on Spark;
- Managing large-scale data lake tables in Parquet and Avro formats;
- Implementing low-latency, scalable data operations and supporting real-time lookups, updates, and analytics using Apache HBase and Apache Cassandra.

This position requires 2 years of experience with the following:
- Implementing ACID-compliant data operations and enabling schema evolution using Delta table structures (see the sketch after this list);
- Implementing partitioning within Hadoop-based architectures;
- Configuring and maintaining Grafana dashboards integrated with Prometheus, Elasticsearch, or CloudWatch to monitor pipeline performance, API services, and system health in real time;
- Documenting data workflows, Spring Boot API specifications, CI/CD processes, Grafana configurations, and cloud architecture using Confluence.

This position requires 1 year of experience with the following:
- Creating and deploying RESTful APIs using Spring Boot in Docker containers to deliver processed data access and operational insights;
- Managing source code to maintain structured development workflows, version control, and team collaboration using Git with GitHub and Bitbucket;
- Building, deploying, and managing scalable data engineering pipelines and analytics infrastructure using Azure Data Factory, Databricks, or AWS tools such as EC2, S3, EMR, Lambda, Glue, IAM, or CloudWatch.
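As an illustration of the Delta-related items above (ACID writes, schema evolution, and partitioning), here is a minimal sketch, assuming a Spark session already configured with the Delta Lake extensions; the table paths and column names are hypothetical and not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes the Delta Lake package is on the classpath and the session is
# configured with its SQL extensions; otherwise format("delta") will fail.
spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Hypothetical source: already-cleansed events produced by an upstream job.
events = spark.read.parquet("s3://example-bucket/curated/events/")

enriched = events.withColumn("ingest_date", F.current_date())

(
    enriched.write
    .format("delta")
    .mode("append")                     # ACID-compliant append to the table
    .option("mergeSchema", "true")      # schema evolution: new columns are merged in
    .partitionBy("ingest_date")         # partitioning within the data lake layout
    .save("s3://example-bucket/delta/events/")  # hypothetical table path
)
```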

Job Location: 8181 Communications Pkwy, Plano, TX 75024.
