Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Data Engineer, WW Ops Finance S&A

at Amazon

Industry: Not specified

Mid Level · No visa sponsorship · Data Engineering

Posted 11 hours ago

Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

Design, build and maintain complex data solutions for Amazon's Operations Finance businesses. Develop and maintain fully automated ETL pipelines using Python, Spark, SQL and AWS services such as S3, Glue, and Lambda. Participate in code reviews, design discussions, and drive scalable, maintainable data architectures with a focus on data quality for internal Finance customers. This role sits in the FP&A Product organization and requires strong SQL, data modeling, and experience with big-data technologies like Redshift, EMR, and Spark.

Are you passionate about standardizing data platforms and automating data engineering to drive analytics and reporting? Do you excel in dynamic, fast-paced environments and find joy in converting data into actionable insights? If you thrive on innovation and can deliver scalable data engineering solutions, then the Worldwide Operations Finance Standardization & Automation (S&A) team has an exciting opportunity for you!

We are looking for a top-notch Data Engineer to join our Financial Planning & Analytics (FP&A) Product organization.

• Strong experience in Data Warehouse and Business Intelligence application development
• Data analysis: understanding of business processes, logical data models and relational database implementations
• Expert knowledge of SQL, including optimizing complex queries
• Basic understanding of statistical analysis; experience in test design and measurement
• Proven track record of working on complex modular projects, and of assuming a leading role in such projects
• Highly motivated, self-driven, and capable of defining own design and test scenarios
• Experience with programming/scripting languages such as Scala or Python preferred
• Ability to evaluate and implement big-data technologies and solutions (Redshift, Glue, EMR, Spark) to process extremely large datasets accurately and on time

Key job responsibilities
- Design, build and maintain complex data solutions for Amazon's Operations Finance businesses
- Develop and maintain fully automated ETL pipelines using scripting languages such as Python, Spark and SQL, and AWS services such as S3, Glue and Lambda
- Actively participate in code reviews, design discussions, team planning and operational excellence, and constructively identify problems and propose solutions
- Make appropriate trade-offs, reuse where possible, and be judicious about introducing dependencies
- Ask the right questions when the data model and requirements are not well defined, and produce designs that are scalable, maintainable and efficient
- Implement and support reporting and analytics infrastructure for internal Finance customers
- Make enhancements that improve the team's data architecture and make it easier to maintain
- Own the data quality of datasets and of any new changes or enhancements
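The responsibilities above centre on fully automated ETL pipelines. As a purely illustrative sketch (not Amazon's actual stack), the extract-transform-load pattern can be shown with Python's standard library, using sqlite3 as a local stand-in for a warehouse such as Redshift and an inline CSV string as a stand-in for a file landing in S3; the table and column names here are hypothetical:

```python
import csv
import io
import sqlite3

def extract(csv_text: str) -> list[dict]:
    """Extract: read raw records from a CSV source (stand-in for an S3 object)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and drop malformed records (data-quality step)."""
    out = []
    for r in rows:
        try:
            out.append((r["order_id"], float(r["amount"])))
        except (KeyError, ValueError):
            continue  # a real pipeline would log or quarantine these rows
    return out

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write cleaned records into the warehouse table (stand-in for Redshift)."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

raw = "order_id,amount\nA1,10.5\nA2,notanumber\nA3,7.25\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # (2, 17.75) — the malformed A2 row is filtered out
```

In production the same three stages would typically be expressed as a Spark job orchestrated by AWS Glue, with S3 as the source and Redshift as the sink, but the shape of the pipeline is the same.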

About the team
This is a core Data Engineering team within the Financial Planning & Analytics Product organization, owning the data infrastructure and datasets that support the Worldwide Ops Finance business. The team is responsible for scaling and sustaining data solutions that support Financial Planning & Analytics products across all Amazon businesses.

Basic Qualifications

- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Knowledge of distributed systems as it pertains to data storage and computing
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS

Preferred Qualifications

- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
