
Data Engineer

at Capgemini

Consultancies

Mid Level · No visa sponsorship · Data Engineering

Posted 5 days ago

Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Argentina

Capgemini Engineering is seeking a Data Engineer to design, build, and maintain data pipelines and ETL processes using Databricks and Apache Spark. The role involves optimizing data workflows for performance and cost, implementing data lakehouse architectures, ingesting data from multiple sources, and collaborating with data scientists and analysts to enable analytics and ML workloads. You will ensure data quality, governance, and security across assets, monitor Databricks clusters and workflows, and integrate Databricks with cloud services (AWS, Azure, or GCP). The position requires 3+ years of data engineering experience, strong SQL/Python, and familiarity with Delta Lake and DevOps practices for data workflows.

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world’s most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same.

Key Responsibilities

• Design, build, and maintain data pipelines and ETL processes using Databricks and Apache Spark.
• Optimize data workflows for performance, scalability, and cost efficiency.
• Implement data lakehouse architecture and manage data ingestion from multiple sources.
• Collaborate with data scientists and analysts to enable advanced analytics and machine learning workloads.
• Ensure data quality, governance, and security across all data assets.
• Monitor and troubleshoot Databricks clusters, jobs, and workflows.
• Integrate Databricks with cloud services (AWS, Azure, or GCP) and other enterprise systems.
• Document processes, standards, and best practices for data engineering.
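The pipeline work described above follows the classic extract-transform-load pattern, with data-quality checks applied before anything reaches the target store. As a rough, language-agnostic illustration of that pattern (plain Python rather than the PySpark/Databricks stack this role actually uses; the record fields and validation rules are hypothetical):

```python
# Minimal ETL sketch. Field names and quality rules are illustrative only;
# in the role itself this logic would run as a PySpark job reading from and
# writing to Delta Lake tables on Databricks.

def extract(raw_rows):
    """Extract: parse raw CSV-like strings into records."""
    records = []
    for line in raw_rows:
        user_id, country, amount = line.split(",")
        records.append({"user_id": user_id, "country": country, "amount": amount})
    return records

def transform(records):
    """Transform: cast types and drop rows that fail basic quality checks."""
    clean = []
    for r in records:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # skip rows with a malformed amount
        if not r["user_id"]:
            continue  # enforce a simple not-null constraint on the key
        clean.append({**r, "amount": amount})
    return clean

def load(records, sink):
    """Load: append validated records to the target store."""
    sink.extend(records)
    return len(records)

raw = ["u1,AR,10.5", "u2,AR,oops", ",AR,3.0", "u3,BR,7.25"]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2 rows survive the quality checks
```

On Databricks the same three stages would typically map to reading from cloud storage, DataFrame transformations with constraint checks, and an append to a Delta table.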

Required Skills & Qualifications

• Bachelor’s degree in Computer Science, Data Engineering, or related field.
• 3+ years of experience in data engineering or big data technologies.
• Hands-on experience with Databricks, Apache Spark, and PySpark.
• Strong knowledge of SQL, Python, and data modeling principles.
• Experience with cloud platforms (AWS, Azure, or GCP) and their data services.
• Familiarity with Delta Lake, Lakehouse architecture, and data governance.
• Understanding of CI/CD pipelines and DevOps practices for data workflows.
• Excellent problem-solving and communication skills.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications.

1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Their work requires the exercise of original thought and judgement, and the ability to supervise the technical and administrative work of other software engineers.
4. Builds skills and expertise in their software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With a strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.
