Specialist Software Engineer - Python, Elasticsearch, Druid

at Societe Generale

Investment Banking

Mid Level · No visa sponsorship · Data Engineering

Posted 2 days ago

Compensation: Not specified
City: Bengaluru
Country: India

We are seeking a highly skilled Specialist Software Engineer with deep expertise in Big Data technologies, data pipeline orchestration, and observability tooling. The role involves designing, developing, and maintaining scalable data processing systems and integrating observability solutions to ensure system reliability and performance. The candidate will design robust data pipelines using Apache Kafka, Apache NiFi, Apache Spark, Hadoop/HDFS, Druid, and Elasticsearch, and will integrate data visualization and monitoring tools such as Kibana, Grafana, and Logstash. Strong Python 3 and shell-scripting skills, familiarity with GitHub Actions, and collaboration with cross-functional teams are essential for success.

{"hiringOrganization":{"logo":"https:\/\/careers.societegenerale.com\/themes\/custom\/sg_careers\/images\/LOGO_Groupe_EN.jpg","@type":"Organization","name":"Societe Generale","sameAs":"https:\/\/careers.societegenerale.com"},"employmentType":"Permanent contract","validThrough":"2025\/12\/29","datePosted":"2025\/12\/21","title":"Specialist Software Engineer - Python, Elastic Search., Druid - IT (Information Technology) - Bangalore, India","@context":"http:\/\/schema.org\/","@type":"JobPosting","description":"ResponsibilitiesJ<\/strong>ob Summary:<\/strong>

We are seeking a highly skilled and motivated Specialist Software Engineer<\/strong> with deep expertise in Big Data technologies<\/strong>, data pipeline orchestration<\/strong>, and observability tooling<\/strong>. The ideal candidate will be responsible for designing, developing, and maintaining scalable data processing systems and integrating observability solutions to ensure system reliability and performance.<\/p>Key Responsibilities:<\/strong>Big Data Engineering:<\/strong>

  • Design and implement robust data pipelines using Apache Kafka<\/strong>, Apache NiFi<\/strong>, Apache Spark<\/strong>, and Sqoop<\/strong>.<\/li>
  • Manage and optimize distributed data storage systems including Hadoop<\/strong>, HDFS<\/strong>, Druid<\/strong>, and ElasticSearch<\/strong>.<\/li>
  • Integrate and maintain data visualization and monitoring tools like Kibana<\/strong>, Grafana<\/strong>, and Logstash<\/strong>.<\/li>
  • Ensure efficient data ingestion, transformation, and delivery across various platforms.<\/li><\/ul>Programming & Scripting:<\/strong>
    • Develop automation scripts and data processing utilities using Python 3<\/strong> and Shell scripting<\/strong>.<\/li>
    • Build reusable components and libraries for data manipulation and system integration.<\/li><\/ul>Observability & Monitoring:<\/strong>
      • Implement and configure observability agents such as Fluentd<\/strong>, Telegraf<\/strong>, and Logstash<\/strong>.<\/li>
      • Collaborate with platform teams to integrate OpenTelemetry<\/strong> for distributed tracing and metrics collection (good to have).<\/li>
      • Maintain dashboards and alerts for system health and performance monitoring.<\/li><\/ul>DevOps & CI\/CD:<\/strong>
        • Contribute to CI\/CD pipeline development using GitHub Actions<\/strong>.<\/li>
        • Collaborate with DevOps teams to ensure seamless deployment and integration of data services.<\/li><\/ul>Collaboration & Documentation:<\/strong>
          • Work closely with cross-functional teams including data scientists, platform engineers, and product managers.<\/li>
          • Document system architecture, data flows, and operational procedures.<\/li>
          • Participate in code reviews, knowledge sharing sessions, and technical mentoring.<\/li><\/ul>Required Skills & Qualifications:<\/strong>
            • Bachelor\u2019s or Master\u2019s degree in Computer Science, Information Technology, or related field.<\/li>
            • 5+ years of experience in Big Data engineering and scripting.<\/li>
            • Strong hands-on experience with:
              • Kafka, NiFi, Hadoop, HDFS, Spark, Sqoop<\/strong><\/li>
              • ElasticSearch, Druid, Kibana, Grafana<\/strong><\/li>
              • Python3, Shell scripting<\/strong><\/li>
              • Logstash, Fluentd, Telegraf<\/strong><\/li><\/ul><\/li>
              • Familiarity with GitHub Actions<\/strong> and basic DevOps practices.<\/li>
              • Exposure to OpenTelemetry<\/strong> is a plus.<\/li>
              • Excellent problem-solving, analytical, and communication skills.<\/li><\/ul>Preferred Qualifications:<\/strong>
                • Experience in building real-time data streaming applications.<\/li>
                • Knowledge of data governance, security, and compliance in Big Data environments.<\/li>
                • Certifications in Big Data technologies or cloud platforms (AWS\/GCP\/Azure) are a plus.<\/li><\/ul>Profile requiredJ<\/strong>ob Summary:<\/strong>

                  We are seeking a highly skilled and motivated Specialist Software Engineer<\/strong> with deep expertise in Big Data technologies<\/strong>, data pipeline orchestration<\/strong>, and observability tooling<\/strong>. The ideal candidate will be responsible for designing, developing, and maintaining scalable data processing systems and integrating observability solutions to ensure system reliability and performance.<\/p>Key Responsibilities:<\/strong>Big Data Engineering:<\/strong>

                  • Design and implement robust data pipelines using Apache Kafka<\/strong>, Apache NiFi<\/strong>, Apache Spark<\/strong>, and Sqoop<\/strong>.<\/li>
                  • Manage and optimize distributed data storage systems including Hadoop<\/strong>, HDFS<\/strong>, Druid<\/strong>, and ElasticSearch<\/strong>.<\/li>
                  • Integrate and maintain data visualization and monitoring tools like Kibana<\/strong>, Grafana<\/strong>, and Logstash<\/strong>.<\/li>
                  • Ensure efficient data ingestion, transformation, and delivery across various platforms.<\/li><\/ul>Programming & Scripting:<\/strong>
                    • Develop automation scripts and data processing utilities using Python 3<\/strong> and Shell scripting<\/strong>.<\/li>
                    • Build reusable components and libraries for data manipulation and system integration.<\/li><\/ul>Observability & Monitoring:<\/strong>
                      • Implement and configure observability agents such as Fluentd<\/strong>, Telegraf<\/strong>, and Logstash<\/strong>.<\/li>
                      • Collaborate with platform teams to integrate OpenTelemetry<\/strong> for distributed tracing and metrics collection (good to have).<\/li>
                      • Maintain dashboards and alerts for system health and performance monitoring.<\/li><\/ul>DevOps & CI\/CD:<\/strong>
                        • Contribute to CI\/CD pipeline development using GitHub Actions<\/strong>.<\/li>
                        • Collaborate with DevOps teams to ensure seamless deployment and integration of data services.<\/li><\/ul>Collaboration & Documentation:<\/strong>
                          • Work closely with cross-functional teams including data scientists, platform engineers, and product managers.<\/li>
                          • Document system architecture, data flows, and operational procedures.<\/li>
                          • Participate in code reviews, knowledge sharing sessions, and technical mentoring.<\/li><\/ul>Required Skills & Qualifications:<\/strong>
                            • Bachelor\u2019s or Master\u2019s degree in Computer Science, Information Technology, or related field.<\/li>
                            • 5+ years of experience in Big Data engineering and scripting.<\/li>
                            • Strong hands-on experience with:
                              • Kafka, NiFi, Hadoop, HDFS, Spark, Sqoop<\/strong><\/li>
                              • ElasticSearch, Druid, Kibana, Grafana<\/strong><\/li>
                              • Python3, Shell scripting<\/strong><\/li>
                              • Logstash, Fluentd, Telegraf<\/strong><\/li><\/ul><\/li>
                              • Familiarity with GitHub Actions<\/strong> and basic DevOps practices.<\/li>
                              • Exposure to OpenTelemetry<\/strong> is a plus.<\/li>
                              • Excellent problem-solving, analytical, and communication skills.<\/li><\/ul>Preferred Qualifications:<\/strong>
                                • Experience in building real-time data streaming applications.<\/li>
                                • Knowledge of data governance, security, and compliance in Big Data environments.<\/li>
                                • Certifications in Big Data technologies or cloud platforms (AWS\/GCP\/Azure) are a plus.<\/li><\/ul>Why join us

                                  We are committed to support accelerating our Group\u2019s ESG strategy by implementing ESG principles in all our activities and policies. They are translated in our business activity (ESG assessment, reporting, project management or IT activities), our work environment and in our responsible practices for environment protection.<\/strong><\/p>Business insight

                                  At Societe Generale, we are convinced that people are drivers of change, and that the world of tomorrow will be shaped by all their initiatives, from the smallest to the most ambitious.<\/p>

                                  Whether you\u2019re joining us for a period of months, years or your entire career, together we can have a positive impact on the future. Creating, daring, innovating and taking action are part of our DNA.<\/p>

                                  If you too want to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis and develop or strengthen your expertise, you will feel right at home with us!<\/p>

                                  Still hesitating?<\/p>

                                  You should know that our employees can dedicate several days per year to solidarity actions during their working hours, including sponsoring people struggling with their orientation or professional integration, participating in the financial education of young apprentices, and sharing their skills with charities. There are many ways to get involved.<\/p>","identifier":{"@type":"PropertyValue","name":"Recruitment Societe Generale","value":"25000M6Z"},"jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore","addressCountry":"India"}}} window.dataLayer = window.dataLayer || []; var aData = { customVarPage1: "Specialist Software Engineer - Python, Elastic Search., Druid", customVarPage2: "Bangalore", customVarPage3: "Permanent contract", customVarPage4: "25000M6Z", customVarPage5: "SG Global Solution Centre", customVarPage6: "IT (Information Technology)", customVarPage7: "2025/12/21" } window.dataLayer.push(aData);

                                  Back to offers

Department: IT (Information Technology)
Contract type: Permanent contract
Location: Bangalore, India (Hybrid)
Reference: 25000M6Z
Start date: Immediately
Publication date: 2025/12/21

Responsibilities

Job Summary:

We are seeking a highly skilled and motivated Specialist Software Engineer with deep expertise in Big Data technologies, data pipeline orchestration, and observability tooling. The ideal candidate will be responsible for designing, developing, and maintaining scalable data processing systems and integrating observability solutions to ensure system reliability and performance.

Key Responsibilities:

Big Data Engineering:
• Design and implement robust data pipelines using Apache Kafka, Apache NiFi, Apache Spark, and Sqoop (a minimal pipeline sketch follows this list).
• Manage and optimize distributed data storage systems including Hadoop, HDFS, Druid, and Elasticsearch.
• Integrate and maintain data visualization and monitoring tools like Kibana, Grafana, and Logstash.
• Ensure efficient data ingestion, transformation, and delivery across various platforms.
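
To make the pipeline work concrete, here is a minimal, illustrative sketch (not part of the posting) of a Kafka-to-Elasticsearch ingestion loop in Python. It assumes the kafka-python and elasticsearch (8.x) client libraries, a local broker and cluster, and hypothetical topic, group, and index names:

import json

from kafka import KafkaConsumer          # pip install kafka-python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Consume JSON events from a hypothetical "events" topic.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="es-indexer",  # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

es = Elasticsearch("http://localhost:9200")

for message in consumer:
    # Index each event; a production pipeline would batch via the bulk
    # helpers and add retries and dead-letter handling.
    es.index(index="events", document=message.value)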
                                  Programming & Scripting:
                                  • Develop automation scripts and data processing utilities using Python 3 and Shell scripting.
                                  • Build reusable components and libraries for data manipulation and system integration.
                                  Observability & Monitoring:
• Implement and configure observability agents such as Fluentd, Telegraf, and Logstash.
• Collaborate with platform teams to integrate OpenTelemetry for distributed tracing and metrics collection (good to have; a minimal tracing sketch follows this list).
• Maintain dashboards and alerts for system health and performance monitoring.
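
As an illustration of the OpenTelemetry item above, this hedged sketch shows manual trace instrumentation with the opentelemetry-sdk Python package, exporting spans to the console; the tracer, span, and attribute names are hypothetical:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire a tracer provider that batches spans to a console exporter;
# a real deployment would export to an OTLP collector instead.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("data-pipeline")  # hypothetical instrumentation name

with tracer.start_as_current_span("ingest-batch") as span:
    span.set_attribute("records.count", 1000)  # hypothetical attribute
    # ... pipeline work would run inside this span ...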
                                  DevOps & CI/CD:
                                  • Contribute to CI/CD pipeline development using GitHub Actions.
                                  • Collaborate with DevOps teams to ensure seamless deployment and integration of data services.
                                  Collaboration & Documentation:
                                  • Work closely with cross-functional teams including data scientists, platform engineers, and product managers.
                                  • Document system architecture, data flows, and operational procedures.
                                  • Participate in code reviews, knowledge sharing sessions, and technical mentoring.
Required Skills & Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
• 5+ years of experience in Big Data engineering and scripting.
• Strong hands-on experience with:
  • Kafka, NiFi, Hadoop, HDFS, Spark, Sqoop
  • Elasticsearch, Druid, Kibana, Grafana
  • Python 3, Shell scripting
  • Logstash, Fluentd, Telegraf
• Familiarity with GitHub Actions and basic DevOps practices.
• Exposure to OpenTelemetry is a plus.
• Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications:
• Experience in building real-time data streaming applications.
• Knowledge of data governance, security, and compliance in Big Data environments.
• Certifications in Big Data technologies or cloud platforms (AWS/GCP/Azure) are a plus.

Why join us

We are committed to supporting and accelerating our Group’s ESG strategy by implementing ESG principles in all our activities and policies. These principles are reflected in our business activity (ESG assessment, reporting, project management, and IT activities), in our work environment, and in our responsible practices for environmental protection.

Business insight

At Societe Generale, we are convinced that people are drivers of change, and that the world of tomorrow will be shaped by all their initiatives, from the smallest to the most ambitious.

Whether you’re joining us for a period of months, years or your entire career, together we can have a positive impact on the future. Creating, daring, innovating and taking action are part of our DNA.

If you too want to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis and develop or strengthen your expertise, you will feel right at home with us!

Still hesitating?

You should know that our employees can dedicate several days per year to solidarity actions during their working hours, including sponsoring people struggling with their orientation or professional integration, participating in the financial education of young apprentices, and sharing their skills with charities. There are many ways to get involved.

Diversity and Inclusion

We are an equal opportunities employer and we are proud to make diversity a strength for our company. Societe Generale is committed to recognizing and promoting all talents, regardless of their beliefs, age, disability, parental status, ethnic origin, nationality, gender identity, sexual orientation, membership of a political, religious, trade union or minority organisation, or any other characteristic that could be subject to discrimination.