
Python Data Engineer - Engineer Intmd Analyst - C11 - CHENNAI

at Citi

Industry not specified

Mid Level · No visa sponsorship · Data Engineering

Posted 12 hours ago

Compensation: Not specified
Currency: Not specified
City: Chennai
Country: India

An intermediate data engineering role focusing on design, development, and deployment of data pipelines and infrastructure. Responsible for building scalable ETL/ELT pipelines with Apache Spark (Python/Scala), writing and optimizing SQL, and developing data models across data warehouses and data lakes. Ensures data quality, governance and performance monitoring; collaborates with data scientists, BI developers, and application teams. Participates in code reviews and contributes to documentation and best practices.

Python Data Engineer - Engineer Intmd Analyst - C11 - CHENNAI

Job Req Id: 26942483
Location(s): Chennai, Tamil Nadu, India
Job Type: Hybrid
Posted: March 05, 2026

Discover your future at Citi

Working at Citi is far more than just a job. A career with us means joining a team of more than 230,000 dedicated people from around the globe. At Citi, you’ll have the opportunity to grow your career, give back to your community and make a real impact.

Job Overview

The Engineer Intmd Analyst is an intermediate-level position responsible for a variety of engineering activities, including the design, acquisition, and development of software and infrastructure in coordination with the Technology team. The overall objective of this role is to ensure quality standards are met within existing and planned frameworks.

Responsibilities:

  • Design, develop, and optimize scalable data pipelines and ETL/ELT processes using Apache Spark (preferably with Scala or Python) to ingest, transform, and load large datasets from diverse sources.
  • Write, optimize, and troubleshoot complex SQL queries, stored procedures, and functions for data extraction, transformation, and reporting within relational and analytical databases.
  • Develop and maintain data models, schema definitions, and database objects in various data storage solutions (e.g., data warehouses, data lakes, operational databases).
  • Ensure data quality, integrity, accuracy, and consistency across all data assets through robust validation and monitoring mechanisms.
  • Collaborate closely with data scientists, data analysts, business intelligence developers, and application teams to understand data requirements and deliver appropriate data solutions.
  • Monitor data pipeline performance, identify bottlenecks, and implement optimizations to improve efficiency and reduce processing times.
  • Manage data lifecycle, including data archival, retention, and compliance with data governance policies and security standards.
  • Participate in code reviews, contribute to documentation, and adhere to engineering best practices.
  • Troubleshoot and resolve data-related issues in production environments.
  • Contribute to the evaluation and selection of new data technologies and tools.
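As a rough illustration of the validation-and-monitoring work described above, here is a minimal data-quality check in plain Python. The field names, rules, and thresholds are invented for the example; a production pipeline at this level would typically express the same checks in Spark or a dedicated data-quality framework.

```python
# Minimal data-quality gate: split a batch into clean rows and rejects.
# The schema and rules below are hypothetical, for illustration only.

REQUIRED_FIELDS = {"id", "amount", "currency"}
KNOWN_CURRENCIES = {"USD", "INR", "EUR"}

def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors for one record (empty = valid)."""
    errors = []
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if "amount" in row and not isinstance(row["amount"], (int, float)):
        errors.append("amount is not numeric")
    if "currency" in row and row["currency"] not in KNOWN_CURRENCIES:
        errors.append(f"unknown currency: {row['currency']}")
    return errors

def partition_batch(rows: list[dict]) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split a batch into clean rows and rejected rows paired with their errors."""
    clean, rejected = [], []
    for row in rows:
        errs = validate_row(row)
        if errs:
            rejected.append((row, errs))  # quarantine for investigation
        else:
            clean.append(row)             # safe to load downstream
    return clean, rejected
```

The rejected rows (with their error lists) would feed the monitoring and alerting side of the pipeline, while only the clean partition is loaded downstream.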

Qualifications:

  • Experience: 5+ years of professional experience in data engineering, backend development with a strong data focus, or a related field.
  • Data Acumen: Strong understanding of data warehousing concepts, dimensional modeling, and data lake architectures.
  • Problem-Solving: Excellent analytical and problem-solving skills, with a keen attention to detail.
  • Communication: Good verbal and written communication skills, with the ability to articulate technical concepts to both technical and non-technical audiences.
  • Teamwork: Ability to work effectively in a collaborative team environment and contribute positively to team goals.
  • Agile: Experience working in an Agile/Scrum development methodology.

Education:

  • Bachelor’s degree/University degree or equivalent experience

Technical Skills

  • Big Data Processing: Strong proficiency with Apache Spark (DataFrames API, Spark SQL) using Scala or Python.
  • Databases: Expert-level SQL skills. Extensive experience with relational databases (e.g., PostgreSQL, Oracle, SQL Server, MySQL) and experience with cloud-native data warehouses (e.g., Snowflake, Google BigQuery, AWS Redshift) or data lake technologies (e.g., Delta Lake).
  • Programming Languages: Strong proficiency in Python or Scala.
  • ETL/ELT Tools: Experience with ETL/ELT methodologies and tools, including data orchestration tools (e.g., Apache Airflow, Azure Data Factory, AWS Step Functions, GCP Cloud Composer).
  • Cloud Platforms: Exposure to major cloud platforms (AWS, Azure, GCP) and their data services (e.g., S3, ADLS, GCS, EC2, Azure VMs, Kubernetes).
  • Version Control: Proficiency with Git and standard version control workflows.
  • Data Modeling: Experience in designing and implementing efficient and scalable data models.
  • Performance Tuning: Ability to optimize Spark jobs, SQL queries, and database performance.
  • Linux/Unix: Familiarity with Linux/Unix environments for scripting and job execution.
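A toy extract-transform-load step ties several of the skills above together (SQL, ETL, basic performance tuning). This sketch uses Python's built-in sqlite3 as a stand-in for a warehouse; the table and column names are invented for illustration, and a real pipeline would run against Spark, Snowflake, or similar.

```python
import sqlite3

# Toy ETL step; sqlite3 stands in for a real warehouse.
# Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, customer TEXT, amount REAL);
    CREATE TABLE daily_totals (customer TEXT PRIMARY KEY, total REAL);
""")

# Extract: load source rows (hard-coded here; normally read from files or APIs).
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "acme", 120.0), (2, "acme", 80.0), (3, "globex", 50.0)],
)

# Transform + Load: aggregate into the reporting table in one SQL statement.
conn.execute("""
    INSERT INTO daily_totals (customer, total)
    SELECT customer, SUM(amount) FROM raw_orders GROUP BY customer
""")

# Indexing the commonly filtered column is a basic performance-tuning step.
conn.execute("CREATE INDEX idx_orders_customer ON raw_orders (customer)")

rows = conn.execute(
    "SELECT customer, total FROM daily_totals ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

The same pattern (staging table, set-based SQL transform, indexed reporting table) scales up directly to the warehouse and Spark SQL work the role describes.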

------------------------------------------------------

Job Family Group:

Technology

------------------------------------------------------

Job Family:

Systems & Engineering

------------------------------------------------------

Time Type:

Full time

------------------------------------------------------

Most Relevant Skills

Please see the requirements listed above.

------------------------------------------------------

Other Relevant Skills

For complementary skills, please see above and/or contact the recruiter.

------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.

View Citi’s EEO Policy Statement and the Know Your Rights poster.

