This role involves designing, developing, and maintaining data pipelines and analytics solutions, collaborating with stakeholders to deliver quality data products. The candidate will work extensively with Databricks, PySpark, Python, and RDBMS technologies to optimize ETL processes and ensure data governance.
We are seeking a skilled and motivated Data Engineer to join our data analytics and engineering team to design, develop, and lead the implementation of data pipeline and analytics solutions. In this role, you will:
- Design, develop, and maintain robust data pipelines using Databricks and PySpark (see the illustrative sketch after this list).
- Design and develop solutions using RDBMS technologies such as Oracle and SQL Server.
- Write efficient, reusable Python code for data transformation and automation tasks.
- Develop and optimize complex SQL queries for data extraction, transformation, and loading (ETL).
- Collaborate with product owners, architects, and business stakeholders to understand data needs and deliver high-quality solutions.
- Ensure data quality, integrity, and governance across all data processes.
- Monitor and troubleshoot data workflows and performance issues.
- Document technical solutions and maintain data engineering best practices.
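For context, the sketch below illustrates the kind of Databricks/PySpark ETL work described above: extracting from an RDBMS over JDBC, applying a simple transformation, and loading the result as a Delta table. It is a minimal, hypothetical example; the connection details, table names, and column names are placeholders, not references to any actual system in this role.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical sketch: all URLs, credentials, tables, and columns are placeholders.
spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read a source table from an RDBMS (e.g. Oracle or SQL Server) over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://db-host:1433;databaseName=sales")  # placeholder URL
    .option("dbtable", "dbo.orders")                                    # placeholder table
    .option("user", "etl_user")                                         # placeholder credentials
    .option("password", "***")
    .load()
)

# Transform: keep completed orders and aggregate revenue and order counts per day.
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_revenue"),
        F.count("*").alias("order_count"),
    )
)

# Load: write the result as a Delta table for downstream analytics consumers.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```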