Machine Learning Engineer - Content Safety Platform (AU remote)

at Canva Pty Ltd

Industry: Not specified

Mid Level · No visa sponsorship · Data Science / AI / ML

Posted 5 hours ago

Compensation: Not specified (currency: AUD)
City: Sydney, Melbourne, Perth
Country: Australia, New Zealand

Join Canva's Content Safety Platform to build the foundational safety infrastructure powering AI-generated content features. You will own end-to-end ML initiatives, build reusable safety models, and design scalable evaluation frameworks across multiple modalities. The role requires collaboration with Legal and Product Policy to deliver compliant, user-safe solutions, and involves working with product teams to balance safety with user experience. This is a remote-friendly position within Australia/New Zealand, contributing to Canva's AI safety at scale.

Join the team redefining how the world experiences design.

Hey, g'day, mabuhay, kia ora, 你好, hallo, vítejte!

Thanks for stopping by. We know job hunting can be a little time-consuming and you're probably keen to find out what's on offer, so we'll get straight to the point.

Where and how you can work

Our flagship campus is in Sydney, with a second campus in Melbourne and co-working spaces in Brisbane, Perth, Adelaide, and Auckland, NZ. You have flexibility in how and where you work — whether that's from one of our spaces, from home, or a mix of both. This role is remote-friendly within Australia/New Zealand, so you can choose the setup that empowers you and your team to do your best work.

About the Group


The Trust & Safety (T&S) Group's vision is to empower everyone to feel safe in trusting Canva. To safeguard our community, our T&S engineering teams build technologies to protect user safety (including, but not limited to, their account, content, data, and privacy) and to prevent, detect, and mitigate abuse and fraud that could compromise the trust people have in Canva, such as unacceptable content, bots, account takeovers, and other abuse vectors.

Within T&S, the Content Safety Platform team specializes in building safety systems for AI-generated content. As Canva rapidly expands its AI capabilities, our team ensures these powerful creative tools remain safe, trustworthy, and compliant. We develop sophisticated ML-based moderation systems, bias mitigation solutions, IP detection frameworks, and responsible AI safeguards that operate at scale. This team sits at the intersection of cutting-edge AI innovation and critical safety engineering.

About the Role


You'll build the foundational safety infrastructure that powers trust across all of Canva's AI features. We're a platform team—our mission is to provide product teams with the tools, models, and systems they need to safely launch AI-generated content features at scale. Whether it's Magic Media, conversational AI, or future capabilities, product teams rely on our platform to detect harmful content, prevent IP violations, mitigate bias, and ensure compliance across multiple modalities of content.

In this role, you'll own significant ML initiatives that directly enable other teams to move faster while staying safe. You'll build reusable safety models, create scalable evaluation frameworks, and develop infrastructure that serves multiple products. This is a high-impact position where your work becomes the safety foundation for Canva's AI innovation—balancing cutting-edge ML techniques with the operational rigor required to protect millions of users. You'll collaborate closely with AI product teams, Legal, and Product Policy to deliver solutions that meet both product and compliance needs.

What you’ll do (responsibilities):

  • Own end-to-end delivery of ML-based safety features, from technical design through production rollout and iteration

  • Build and maintain ML models that safeguard AI-generated content across multiple modalities (images, video, audio, text), detecting harmful content, IP violations, bias, and other safety concerns

  • Design and implement RAG (Retrieval-Augmented Generation) architectures and other advanced ML systems to enhance detection capabilities

  • Fine-tune and evaluate LLM-based models for content moderation and prompt filtering, making data-driven decisions about model selection and optimization

  • Collaborate with Legal, Product Policy, and AI product teams to define requirements, balance safety with user experience, and deliver compliant solutions

  • Create evaluation frameworks to measure model quality, safety coverage, false positive/negative rates, and policy alignment

  • Monitor production systems, respond to incidents, and maintain operational excellence through documentation and runbooks

What we're looking for:


You're a machine learning engineer with a proven track record of delivering ML-powered features in production. You bring technical expertise across the ML lifecycle—from data wrangling and model development to evaluation, deployment, and monitoring. You're comfortable operating independently while collaborating with cross-functional teams, and you're motivated by user impact and product outcomes.

  • Strong bias for action and product-minded approach to engineering

  • Hands-on engineer who loves working alongside software engineers, writing Python production code (Java/Kotlin backend experience is a plus), and solving complex problems

  • Experience building and deploying ML systems using modern architectures, including LLMs and RAG

  • Comfortable influencing roadmap decisions and navigating ambiguous problem spaces

  • Passionate about the rapidly evolving AI landscape and proactive about experimenting with emerging techniques
