
AI Security Engineer
at Perplexity AI
Posted 12 hours ago
- Compensation: Not specified
- City: Not specified
- Country: Not specified
- Currency: Not specified
Perplexity is seeking a highly skilled, experienced, and hands-on AI Security Engineer to join our security team, driving the protection of next-generation AI systems against adversarial threats. In this role, you’ll design and implement robust mechanisms to secure self-hosted models, LLM APIs, agents, MCPs, and the core AI stack. You’ll empower developers with tools and guidance, as well as technical contributions, enabling innovation while ensuring AI security is strong by default.
Our tech stack includes Python, NextJS, TypeScript, Docker, AWS, Kubernetes, and PostgreSQL.
Responsibilities
Define, build, and refine mechanisms to secure AI systems (including self-hosted models, LLM APIs, agents, MCPs, and other core components of the AI stack) against adversarial behavior of all kinds
Understand technically complex AI systems, identify potential weaknesses in their architecture, and implement improvements
Spend at least 50% of your time performing hands-on remediation, working closely with peer engineers to drive remediations
Plan and carry out threat modeling activities and realistic threat simulations across our offerings
Conduct cybersecurity evaluations and lead AI security assessments in a cross-functional environment
Develop initiatives that improve our capabilities to effectively evaluate AI systems and enhance the organization's prevention, detection, response, and threat hunting capabilities
Provide guidance and education to developers to help deter and prevent threats
Qualifications
Hands-on coding and prompting experience
Bachelor of Science or Master of Science in Computer Science or a related field, or equivalent experience
Technical and process subject matter expertise regarding AI security services and attacker tactics, techniques, and procedures
Good understanding of LLMs, AI architecture patterns, machine learning models, and related technologies such as MCP
Understanding of application security principles and secure coding practices
Experience developing and implementing security procedures and policies
Strong problem-solving, project management, leadership, and communication skills
Self-motivated with a willingness to take ownership of tasks
4+ years of industry experience

