Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Software Engineer [Multiple Positions Available]

at J.P. Morgan

Bulge Bracket Investment Banks


Tech Lead · No visa sponsorship · Data Engineering

Posted 6 days ago

Compensation: Not specified (USD)
City: Not specified
Country: United States

Lead a team of cloud data engineers and architects to deliver scalable, cloud-native data lakes and real-time analytics platforms on AWS, Databricks, and Immuta. Define data engineering best practices, governance, and automation; manage budgeting and FinOps for cloud data platforms; establish observability with SLIs/SLOs and self-healing architectures. Oversee CI/CD and tooling, ML pipeline automation, and security/compliance, while collaborating with data science, security, and DevOps teams to align data solutions with enterprise goals. Location: Plano, TX with up to 10% travel; develop the technical roadmap and data strategy to drive innovation and cost optimization.

Location: Plano, TX, United States

DESCRIPTION:

Duties: Lead a team of cloud data engineers and architects to deliver scalable, cloud-native data lakes and real-time analytics platforms on AWS, Databricks, and Immuta. Drive performance management by setting goals, conducting evaluations, and mentoring team members to enhance their skills. Establish data engineering best practices, including cloud adoption frameworks and automation strategies. Manage budgeting and cost optimization for cloud data platforms using FinOps strategies, AWS Cost Explorer, and right-sizing techniques. Define observability and reliability goals by implementing SLIs, SLOs, and automation for monitoring and self-healing architectures. Lead vendor and tool evaluations to select scalable, cost-effective technologies. Oversee infrastructure planning and governance, ensuring compliance with security and data privacy regulations through ABAC and fine-grained permissions with Immuta. Collaborate with data science, analytics, security, and DevOps teams to align data solutions with enterprise objectives. Lead the development of operational processes for data ingestion, transformation, governance, and consumption. Define the technical roadmap and data strategy to drive innovation and cost optimization. Architect and optimize cloud-based data pipelines using Databricks, Starburst, Iceberg, and Snowflake. Ensure reliability and performance of data processing workflows with Apache Spark, Spark Streaming, Delta Live Tables, and AWS Kinesis. Lead CI/CD and automation initiatives using Jenkins, GitHub Actions, Terraform, and Databricks Asset Bundles. Provide oversight of ML pipeline automation with MLflow and model lifecycle management. Enhance security and compliance with data protection frameworks and automated governance. Optimize federated query execution with Starburst and Starburst Stargate.

QUALIFICATIONS:

Minimum education and experience required: Bachelor's degree in Electrical and Electronic Engineering, Computer Science, Computer Engineering, or related field of study plus seven (7) years of experience in the job offered or as Software Engineer, Architect, Site Reliability Engineer, Applications Support, Oracle Apps Systems Engineer, or related occupation.

Skills Required: This position requires experience with the following: leading the design and development of data-driven applications using Python, PySpark, Shell scripts, and PL/SQL to ensure scalable deployment on the AWS cloud platform; managing cloud infrastructure with Terraform, adhering to Infrastructure as Code (IaC) principles and optimizing AWS resources; driving solutions leveraging AWS services including EC2, EKS (Kubernetes), ECS, S3, EMR, Lake Formation, Glue Catalog, Glue Crawlers, Lambda, and Step Functions to build high-performing data platforms; designing and implementing modern data lake solutions with technologies including Databricks, Immuta, Iceberg, and Snowflake to ensure scalability, security, and governance; developing attribute-based access control (ABAC) using Immuta for fine-grained, policy-driven access across AWS and on-premises data lakes; integrating Databricks, Starburst, and Immuta with Tableau and Alteryx for secure, governed data consumption; creating real-time streaming solutions using orchestration tools, data ingestion frameworks, Delta Live Tables, and Spark Streaming for scalable analytics; designing monitoring and observability solutions with service level indicator (SLI)-based and service level objective (SLO)-based alerting, telemetry collection, and anomaly detection using CloudWatch, CloudTrail, Grafana, OpenTelemetry, and Splunk; shaping AI and ML workflows with MLflow for feature engineering, model training, validation, and deployment to align with business objectives; automating deployment pipelines using Jenkins, GitHub Actions, Databricks Asset Bundles, and Terraform for efficient application rollouts; and troubleshooting and resolving performance, reliability, and scalability issues, collaborating with engineers to implement best practices for distributed systems.

Job Location: 8181 Communications Pkwy, Plano, TX 75024. This position requires up to 10% domestic travel to JPMC facilities.

