Data Engineer III, Global Transportation Tech Services (GTTS)
at Amazon
Posted 4 hours ago
- Compensation: Not specified (USD)
- City: Luxembourg
- Country: Luxembourg, India
Build and operate data infrastructure for Amazon's transportation network by designing end-to-end data pipelines that process terabytes of data daily. Develop data models for analytics and ML use cases and deliver solutions to data consumers across the organization. Work with AWS data technologies (EMR, Glue, S3, Kinesis, Athena, Lake Formation, Redshift) and build Spark-based pipelines using Scala or Python, while driving data quality, reliability, and on-call support. Mentor DE-I/DE-II engineers and contribute to design and code reviews to continuously improve data systems.
Global Transportation Tech Services (GTTS) creates software that manages the foundation of Amazon's outbound transportation network. Your work has a direct financial impact on network efficiency and affects whether customers get their packages on time.
As a Data Engineer, you'll build the data infrastructure that powers our transportation systems. You'll design and implement data pipelines that process terabytes of transportation data daily, build data models that enable analytics and machine learning, and create the data foundations that drive operational decisions across Amazon's global network.
Key job responsibilities
- Working independently to own end-to-end delivery (from design through release) of data pipelines and medium-sized data projects
- Designing and implementing data models that support analytics, reporting, and ML use cases
- Working with AWS data technologies such as EMR, Glue, S3, Kinesis, Athena, Lake Formation and Redshift
- Building data pipelines using Spark + Scala/Python that process TBs of data per day
- Implementing data quality frameworks and monitoring for pipeline reliability
- Working with data consumers (analysts, scientists, business teams) to understand requirements and deliver solutions
- Contributing to design reviews and code reviews for data systems
- Participating in operational support for our data products by joining a regular on-call rotation
- Driving data quality, reliability, and process improvements
- Mentoring DE-I and DE-II engineers and helping them grow technically
- Participating in regular hackathons to bring new ideas to GTTS
A day in the life
- You build and operate your data systems - we handle on-call for what we create
- Work spans multiple technologies: data pipelines, ETL/ELT, data warehousing, streaming, and integration with analytics and ML platforms
- Regular interaction with data consumers - analysts, scientists, and business stakeholders
- We value different perspectives and encourage open discussion
- Opportunity to work on data problems that affect transportation operations globally
- Team members work across different time zones
About the team
We're a global team with members in Hyderabad and Luxembourg. The team includes data engineers, software engineers, product managers, technical program managers and applied scientists working on transportation problems.
Basic Qualifications
- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members on best practices
Preferred Qualifications
- Knowledge of professional software engineering & best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence
- Experience providing technical leadership and mentoring other engineers on data engineering best practices
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience operating large data warehouses
- Experience with Kafka or other big data architectures, and with programming/scripting languages (Batch, VB, PowerShell, Java, C#, Chef, Perl, Ruby and/or PHP)
- Experience with database and data technologies (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis)
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

