As a Trust and Safety AI Red Team Analyst, you will lead red-teaming efforts to simulate real-world adversarial scenarios and identify vulnerabilities in AI-driven features across Epic’s gaming ecosystem. You will develop, prototype, and teach novel red-teaming techniques and trust-and-safety methodologies to strengthen the team’s capabilities. You will analyze qualitative and quantitative data to quantify risk and provide evidence-based findings to protect player safety and uphold AI integrity. You will collaborate with AI researchers, product managers, and safety teams to remediate issues and ensure Epic policies and regulatory standards are met.
WHAT MAKES US EPIC?

At the core of Epic's success are talented, passionate people. Epic prides itself on creating a collaborative, welcoming, and creative environment. Whether it's building award-winning games or crafting engine technology that enables others to make visually stunning interactive experiences, we're always innovating. Being Epic means being part of a team that continually strives to do right by our community and users while raising the bar of engine and game development.

TRUST AND SAFETY

What We Do

The Epic Trust and Safety team provides a safer experience for Epic users. We work across multiple products and services to improve technology and craft transparent policies so our players and users can have positive experiences on our platforms.

What You'll Do

As a Trust and Safety AI Red Team Analyst at Epic Games, you will be instrumental in protecting our gaming ecosystem by identifying and mitigating trust and safety risks in AI-driven features. Your work will ensure that our games remain safe, inclusive, and enjoyable for players by proactively addressing potential abuses of our content rules and community rules. The ideal candidate is a creative, mission-driven investigator who is passionate about safeguarding player experiences and upholding the integrity of AI systems in gaming.
In this role, you will
- Take a leadership role in developing, prototyping, and teaching novel red-teaming techniques and trust-and-safety methodologies to enhance team capabilities
- Investigate and understand how adversarial attacks, such as prompt injection, data poisoning, or bias exploitation, could manifest in Epic's products
- Lead red-teaming efforts to simulate real-world scenarios, identify vulnerabilities, and design forward-looking strategies to mitigate risks and ensure player safety
- Proactively hunt for undetected vulnerabilities and misuse patterns in AI systems by leveraging diverse investigative resources and methodologies
- Analyze qualitative and quantitative data to identify trends, quantify risk, and provide clear, evidence-based findings that support trust-and-safety initiatives
- Collaborate with AI researchers, product managers, and trust-and-safety teams to mitigate identified risks and align AI systems with Epic policies and regulatory standards
- Provide launch support, including quick fixes, during AI-involved product launches

What we're looking for
- 5+ years of experience conducting investigations or red teaming in fields such as cybersecurity, AI ethics, trust and safety, or related areas
- Proven ability to develop multi-source, evidence-based findings and communicate them effectively to technical and non-technical stakeholders
- Strong proficiency in open-source research and adversarial testing to uncover hidden vulnerabilities or risks in AI systems
- Experience managing or contributing to projects with organization-wide impact, involving cross-functional collaboration with diverse teams
- Ability to prioritize tasks and execute independently with minimal oversight
- Subject-matter expertise or prior work experience with trust and safety challenges in AI systems, particularly generative AI models
- Experience conducting data analysis using Python, SQL, or similar tools to support investigations or red-teaming efforts
- Familiarity with AI governance, ethical AI frameworks, or emerging regulatory standards for AI safety
- Experience collaborating with distributed teams across multiple locations or time zones
- MS or equivalent experience in Computer Science, Cybersecurity, AI Ethics, Data Science, or a related field

This role is open to multiple locations across the US (including CA and NYC).

Pay Transparency Information

The expected annual base pay range(s) for this position are detailed below. Each base pay range applies only to individuals who are residents of, or will be expected to work within, the specified locale. Compensation varies based on a variety of factors, including (but not limited to) skills and competencies, qualifications, knowledge, and experience. In addition to base pay, most employees are eligible to participate in Epic's generous benefit plans and discretionary incentive programs (subject to the terms of those plans or programs).

New York City Base Pay Range: $170,135 - $283,558 USD
California Base Pay Range: $159,926 - $266,544 USD

ABOUT US

Epic Games spans 25 countries with 46 studios and 4,500+ employees globally. For over 25 years, we've been making award-winning games and engine technology that empowers others to make visually stunning games and 3D content that bring environments to life like never before. Epic's award-winning Unreal Engine technology not only gives game developers the ability to build high-fidelity, interactive experiences for PC, console, mobile, and VR; it is also being embraced by content creators across a variety of industries such as media and entertainment, automotive, and architectural visualization.