Senior Engineer, AI Agent Security Research
at OKX
Posted 7 hours ago
- Compensation: Not specified
- Currency: Not specified
- City: Singapore
- Country: Singapore
Join OKX as a Senior Engineer specializing in AI agent security research. You will design and implement an AI-Driven Code Security Detection Engine with multi-agent collaboration, integrate RAG/CoT/Reflection, and build DevSecOps plugins for CI/CD pipelines. You will contribute to a security framework for large language model applications across input, output, and runtime layers, and help build observable, auditable agent systems. This role blends backend engineering, security research, and platform development in a blockchain/crypto environment.
What You’ll Be Doing
- AI-Driven Code Security Detection Engine
- Design and implement a multi-agent collaborative code auditing system covering vulnerability detection, malicious code identification, and sensitive information leakage scenarios; lead the role decomposition of Planners/Executors/Critics, tool invocation chains, and cross-agent state synchronization mechanism design.
- Integrate RAG, Chain-of-Thought, Reflection, and other technologies into security audit agents. Continuously optimize detection accuracy and recall rates while establishing a quantifiable evaluation and iteration framework.
- Deeply integrate with DevSecOps workflows. Develop plugins for mainstream pipelines like GitLab CI/CD, Tekton, and Jenkins to achieve “audit-on-commit.”
- AI System Security Protection and Threat Response
- Build a security protection framework for large language model applications covering three layers: the input layer (prompt injection, jailbreak detection), the output layer (sensitive information leakage, compliance auditing), and the runtime layer (tool invocation sandboxing, anomalous-behavior circuit breaking).
- Develop Agent workflows for automated alert classification, contextual correlation, and false positive filtering. Integrate RAG-driven threat intelligence retrieval to generate automated analysis conclusions, supporting SOAR platform integration.
- Design human-in-the-loop intervention mechanisms and agent behavior audit systems so that agent actions in production remain observable, traceable, and open to intervention, adhering to industry standards such as the OWASP Top 10 for LLM Applications.
- Engineering Development and Platform Services
- Construct a highly available, scalable Agent service architecture supporting large-scale concurrent scanning task scheduling and fault tolerance.
- Oversee standardized API output for detection capabilities, building closed-loop systems for rule management, result visualization, and false positive feedback.
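The Planner/Executor/Critic decomposition described above can be sketched in miniature. Everything in this sketch is an illustrative assumption, not OKX's actual engine: the class names, the regex rules standing in for LLM and tool calls, and the comment-based false-positive filter are all placeholders for the real components.

```python
"""Toy sketch of a planner/executor/critic code-audit loop (illustrative only)."""
import re
from dataclasses import dataclass


@dataclass
class Finding:
    rule: str
    line: int
    detail: str


class Planner:
    """Decomposes an audit request into concrete detection tasks."""

    def plan(self, code: str) -> list[str]:
        # A real planner would reason over the codebase; here the task
        # list is fixed.
        return ["secrets", "dangerous-calls"]


class Executor:
    """Runs one detection task; toy regexes stand in for LLM/tool calls."""

    RULES = {
        "secrets": re.compile(r"(api_key|password)\s*=\s*['\"][^'\"]+['\"]", re.I),
        "dangerous-calls": re.compile(r"\b(eval|exec)\s*\("),
    }

    def run(self, task: str, code: str) -> list[Finding]:
        findings = []
        for lineno, line in enumerate(code.splitlines(), 1):
            if self.RULES[task].search(line):
                findings.append(Finding(task, lineno, line.strip()))
        return findings


class Critic:
    """Filters obvious false positives, e.g. matches inside comment lines."""

    def review(self, findings: list[Finding]) -> list[Finding]:
        return [f for f in findings if not f.detail.startswith("#")]


def audit(code: str) -> list[Finding]:
    planner, executor, critic = Planner(), Executor(), Critic()
    findings: list[Finding] = []
    for task in planner.plan(code):
        findings.extend(executor.run(task, code))
    return critic.review(findings)
```

In a production engine each Executor task would invoke an LLM or a SAST tool, cross-agent state would be synchronized explicitly, and the Critic would apply Reflection-style reasoning rather than a comment filter.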
What We Look For In You
- Development Experience: 3+ years of backend development experience, proficient in at least one of Python/Go/Java, with a solid engineering foundation.
- Agent Implementation & Security: Hands-on experience deploying LLM agents (not just demos), able to detail engineering challenges such as agent architecture design, hallucination handling, and tool invocation fault tolerance; hands-on experience with AI security, understanding risks such as prompt injection, jailbreaking, malicious agent injection, and tool misuse, with defense strategies you can implement.
- Framework Proficiency: Familiarity with at least one agent framework (LangChain, LlamaIndex, AutoGen, CrewAI, or LangGraph), with production project experience.
- Engineering Capabilities: Proficient in Docker and Kubernetes, with expertise in microservices architecture design and deployment.
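Input-layer risks like the prompt injection and jailbreak patterns mentioned above are often screened with lightweight heuristics before any heavier model-based checks run. A minimal sketch, in which the deny-list patterns and the function name are assumptions for illustration:

```python
import re

# Illustrative deny-list patterns only; a real input-layer guard would
# combine such heuristics with a trained classifier and context-aware checks.
INJECTION_PATTERNS = [
    re.compile(p, re.I)
    for p in (
        r"ignore (all )?(previous|prior) instructions",
        r"you are now (in )?developer mode",
        r"reveal your (system|hidden) prompt",
    )
]


def screen_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for an incoming user prompt."""
    hits = [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
    return (not hits, hits)
```

Regex deny-lists catch only known phrasings, which is why postings like this pair them with output-layer auditing and runtime sandboxing rather than relying on input filtering alone.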
Nice to Haves
- Security Tool Experience: Experience with SAST/SCA tools, or deep usage of code auditing tools like CodeQL, Semgrep, or SonarQube.
- Model Fine-Tuning: Experience with LLM fine-tuning (SFT, LoRA), or familiarity with local deployment and optimization of models like Llama 3, Qwen, or DeepSeek. Bonus points for security-domain fine-tuning experience, such as training and evaluating security detection models for malicious prompt detection, unauthorized access identification, or harmful content filtering.
- Open-Source Contributions: High-quality open-source projects related to agents on GitHub, or pull requests submitted to mainstream LLM frameworks.
- Security Competitions: Awards from CTF competitions, or a track record of submitting CVE/CNVD vulnerabilities.
Perks & Benefits
- Competitive total compensation package
- L&D programs and Education subsidy for employees' growth and development
- Various team building programs and company events
- Wellness and meal allowances
- Comprehensive healthcare schemes for employees and dependants
- More that we'd love to tell you about along the way!
#LI-ML1 #LI-ONSITE

