Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

ML Engineer

at Capgemini

Consultancies

Mid Level | No visa sponsorship | Data Engineering

Posted 6 days ago

Compensation: Not specified
Currency: Not specified
City: Bengaluru
Country: India

Senior Data Engineer (Databricks & PySpark Specialist). Location: Pune/Bangalore. Experience: 6+ years. You will design and optimize data workflows leveraging Databricks, PySpark, and modern cloud technologies; build and manage scalable data pipelines, orchestrate jobs, and configure Databricks clusters; develop ETL and data transformation workflows using PySpark; implement GitOps for version control and deployment; and contribute to CI/CD for ML workflows. Nice to have: Azure ML services, infrastructure automation (Bicep/CloudFormation).
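The cluster configuration and job orchestration this role describes are typically expressed declaratively. Below is a minimal sketch in the shape of a Databricks Jobs API payload; every name, path, and value here is an illustrative placeholder, not part of the posting, so treat it only as an indication of what "configure clusters, pipelines, and job orchestration" looks like in practice:

```json
{
  "name": "example-etl-job",
  "job_clusters": [
    {
      "job_cluster_key": "etl-cluster",
      "new_cluster": {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2
      }
    }
  ],
  "tasks": [
    {
      "task_key": "ingest",
      "job_cluster_key": "etl-cluster",
      "notebook_task": { "notebook_path": "/Repos/example/ingest" }
    },
    {
      "task_key": "transform",
      "depends_on": [ { "task_key": "ingest" } ],
      "job_cluster_key": "etl-cluster",
      "notebook_task": { "notebook_path": "/Repos/example/transform" }
    }
  ]
}
```

Keeping a definition like this in version control and deploying it from a pipeline is also what the posting's GitOps requirement points at.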

Job Description

Senior Data Engineer (Databricks & PySpark Specialist)
Location: Pune / Bangalore
Experience: 6+ years
Start Date: As soon as possible

Join us to design and optimize data workflows leveraging Databricks, PySpark, and modern cloud technologies.

Your Role
As a Senior Data Engineer, you will work on building and managing scalable data pipelines, orchestrating jobs, and configuring clusters in Databricks. You'll collaborate with cross-functional teams to ensure efficient ETL processes, implement GitOps practices, and contribute to automation and CI/CD for ML workflows.

In this role, you will:
- Configure and manage Databricks clusters, pipelines, and job orchestration.
- Develop ETL and data transformation workflows using PySpark.
- Implement GitOps principles for version control and deployment.
- Collaborate with teams to integrate data solutions into ML workflows.
- Optimize performance and ensure reliability of data processes.

Your Profile
- 6+ years of experience in data engineering.
- Strong hands-on experience with Databricks (cluster setup, pipelines, orchestration).
- Proficiency in PySpark for ETL and data transformations.
- Understanding of GitOps practices.

Nice to have:
- Experience building CI/CD pipelines for ML workflows.
- Working knowledge of Azure ML services (model registry, jobs, batch endpoints).
- Familiarity with infrastructure automation using Bicep or CloudFormation.

Key Skills
Databricks | PySpark | ETL | GitOps | CI/CD | Azure ML | Infrastructure Automation (Bicep/CloudFormation)
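The ETL work at the heart of this role follows the classic extract-transform-load pattern. As a rough orientation, here is a minimal, dependency-free Python sketch of that pattern; all data and function names are hypothetical, with the PySpark equivalents a real Databricks pipeline would use noted in comments:

```python
# Minimal plain-Python sketch of the extract-transform-load (ETL) pattern.
# All data and names are illustrative; a real pipeline would use PySpark
# DataFrames reading from and writing to cloud storage.

def extract():
    """Stand-in for reading raw records (e.g. spark.read.parquet(...))."""
    return [
        {"user_id": 1, "amount": "10.50"},
        {"user_id": 2, "amount": "3.20"},
        {"user_id": 1, "amount": "7.30"},
    ]

def transform(rows):
    """Cast amounts and total spend per user (a groupBy().sum() in PySpark)."""
    totals = {}
    for row in rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + float(row["amount"])
    return totals

def load(totals, sink):
    """Stand-in for persisting results (e.g. df.write.saveAsTable(...))."""
    sink.update(totals)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # per-user totals: user 1 -> 17.8, user 2 -> 3.2
```

The separation into three functions is the point: each stage can be tested and re-run independently, which is what makes pipelines like this orchestratable as distinct Databricks job tasks.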

