
Data Platform Engineer – Commodities Technology

at Millennium

Industry: Hedge Funds

Mid Level · No visa sponsorship · Data Engineering

Posted a month ago

Compensation: Not specified
Currency: Not specified
City: Singapore, Bengaluru
Country: United States, Singapore, India

Millennium's Commodities Technology team is hiring a Data Platform Engineer to design and build the next-generation data platform for commodities data ingestion, transformation, cataloging, and consumption. The role focuses on developing resilient, event-driven pipelines and cloud-native infrastructure using Python, SQL, Airflow, Kafka, and AWS, as well as FastAPI-based services for data access. You will optimize for performance, reliability, and observability, and collaborate with distributed engineering and research teams across the US, Europe, Singapore, and Bangalore to support both experimentation and production workloads.

About Us

Founded in 1989, Millennium is a global alternative investment management firm that pursues a diverse array of investment strategies across industry sectors, asset classes, and geographies. The firm's primary investment areas are Fundamental Equity, Equity Arbitrage, Fixed Income, Commodities, and Quantitative Strategies. We solve hard, interesting problems at the intersection of computer science, finance, and mathematics, and we focus on rapidly applying innovations to real-world scenarios. This enables engineers to work on interesting problems, learn quickly, and have a deep impact on the firm and the business.

Within Millennium, the Commodities Technology team builds the data and analytics platforms that power our commodities investment strategies. We aggregate and process large volumes of fundamental and alternative data – including weather, supply/demand indicators, storage and transportation data – to provide our Portfolio Managers with a differentiated information edge.

The Role

We are seeking a Data Platform Engineer to help build the next-generation data platform (CFP) for the Commodities business.

In this role, you will design and implement the core platform infrastructure, APIs, and event‑driven services that power ingestion, transformation, cataloging, and consumption of commodities data. You will work across Python, SQL and modern cloud services to build resilient pipelines, orchestration frameworks, and system management tools with a strong focus on reliability, observability, and performance.
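To give a feel for the event-driven ingestion-and-transformation work described above, here is a deliberately simplified sketch using only Python's standard-library `queue` and `threading` modules as a stand-in for Kafka; the event fields (`price_cents`, `price_usd`) are invented for illustration and are not taken from the actual platform:

```python
import json
import queue
import threading

def producer(events: list[dict], bus: queue.Queue) -> None:
    """Publish raw commodity readings onto the bus, then signal completion."""
    for event in events:
        bus.put(json.dumps(event))
    bus.put(None)  # sentinel: no more events

def consumer(bus: queue.Queue, store: list[dict]) -> None:
    """Consume events, apply a simple transformation, and store the result."""
    while True:
        raw = bus.get()
        if raw is None:
            break
        event = json.loads(raw)
        # Example transformation: normalize units (hypothetical field names)
        event["price_usd"] = round(event.pop("price_cents") / 100, 2)
        store.append(event)

def run_pipeline(events: list[dict]) -> list[dict]:
    """Run one producer and one consumer end to end; return stored events."""
    bus: queue.Queue = queue.Queue()
    store: list[dict] = []
    worker = threading.Thread(target=consumer, args=(bus, store))
    worker.start()
    producer(events, bus)
    worker.join()
    return store
```

In production this shape would be realized with Kafka topics for the bus and Airflow for scheduling and orchestration, but the flow (publish, consume, transform, store) is the same.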

You will work closely with engineering teams in the US, Europe, and Singapore as well as with our commodities modelling and research teams in Bangalore to deliver a scalable platform that can support rapid experimentation and production workloads.

Key Responsibilities

  • Platform Engineering: Design and build the foundational data platform components, including event handling, system management tools, and query‑optimized storage for large‑scale commodities datasets.

  • Data Pipelines & Orchestration: Implement and maintain robust batch and streaming pipelines using Python, SQL, Airflow, and Kafka to ingest and transform data from multiple internal and external sources.

  • Cloud Infrastructure: Develop and manage cloud‑native infrastructure on AWS (S3, SQS, RDS, Terraform), ensuring security, scalability, and cost efficiency.

  • API & Services Development: Build and maintain FastAPI‑based services and APIs for data access, metadata, and platform operations, enabling self‑service consumption by downstream users.

  • Performance & Reliability: Optimize queries, workflows, and resource usage to deliver low‑latency data access and high platform uptime; introduce monitoring, alerting, and automated testing (PyTest, CI/CD).

  • Collaboration & Best Practices: Partner with quantitative researchers, data scientists, and other engineers to understand requirements, translate them into platform capabilities, and promote best practices in code quality, DevOps, and documentation.
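To make the "query-optimized storage" responsibility concrete: indexing a table on its common access pattern is the basic move. This minimal sketch uses Python's built-in `sqlite3` purely as a stand-in for the platform's actual storage engine; the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (symbol TEXT, ts TEXT, value REAL)")

# An index matching the dominant access pattern (symbol + time range)
# turns full table scans into index lookups for time-series queries.
conn.execute("CREATE INDEX idx_symbol_ts ON readings (symbol, ts)")

conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [
        ("NG", "2024-01-01", 2.51),
        ("NG", "2024-01-02", 2.58),
        ("WTI", "2024-01-01", 71.3),
    ],
)

# Range query over one symbol, served by the composite index.
rows = conn.execute(
    "SELECT ts, value FROM readings "
    "WHERE symbol = ? AND ts >= ? ORDER BY ts",
    ("NG", "2024-01-01"),
).fetchall()
```

At commodities-dataset scale the same idea shows up as partitioning and sort keys in engines like Snowflake or ClickHouse rather than a single B-tree index.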

Required Qualifications

  • Experience: 4–8 years of software/data engineering experience, preferably building or operating data platforms or large‑scale data pipelines.

  • Programming: Strong proficiency in Python with solid software engineering practices (testing, code review, CI/CD).

  • Data & SQL: Hands‑on experience with SQL and relational databases (Snowflake, Postgres or similar); understanding of data modelling and query optimization.

  • Orchestration & Streaming: Practical experience with Airflow (or similar workflow orchestration tools) and message/streaming systems such as Kafka.

  • Cloud & Infrastructure as Code: Experience with AWS services (S3, SQS, RDS) and infrastructure‑as‑code tools such as Terraform.

  • APIs & Services: Experience building RESTful services, ideally with FastAPI or a similar Python web framework.

  • DevOps: Familiarity with Git‑based workflows and CI/CD tooling (e.g., GitHub Actions) and automated testing frameworks (PyTest).

  • Soft Skills: Strong communication skills, ability to work in a distributed team, and a pragmatic, ownership‑driven mindset.
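The automated-testing expectation in the DevOps bullet typically means PyTest-style unit tests around transformation logic, along these lines (the function under test and its behavior are illustrative, not part of the actual codebase):

```python
def normalize_price(price_cents: int) -> float:
    """Convert an integer cents value to a dollar amount."""
    return round(price_cents / 100, 2)

# PyTest discovers functions named test_*; plain asserts form the test body.
def test_normalize_price():
    assert normalize_price(7512) == 75.12
    assert normalize_price(0) == 0.0

def test_normalize_price_rounds():
    assert normalize_price(101) == 1.01
```

In CI (e.g., a GitHub Actions workflow), running `pytest` on every push gates merges on tests like these.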

Preferred Qualifications

  • Experience with columnar/analytic data formats and engines (e.g., Iceberg, ClickHouse, Parquet).

  • Exposure to monitoring/observability stacks (Prometheus, Grafana, OpenTelemetry, etc.).

  • Prior experience in financial markets or commodities data environments.

  • Experience working in high‑impact, globally distributed engineering teams.
