Join ByteDance's Redis Family team to build AI-native distributed KV caching and memory systems at scale. Design and develop next-generation Redis core systems, multi-tier memory architectures, and AI-aware memory services with a focus on reliability, low latency, and high throughput. Contribute to production-grade automation, monitoring, disaster recovery, and data ecosystem integrations, while exploring emerging hardware and systems technologies (ZNS SSDs, io_uring, RDMA/CXL) and AI+DB directions in production.
About the Team

Join ByteDance's Redis Family team, where we build and operate AI-native distributed KV caching and memory systems powering ByteDance's global infrastructure. Beyond traditional caching, we are evolving toward a unified Memory Infrastructure Layer that supports high-performance Redis-compatible KV systems, persistent and tiered storage engines, LLM KV Cache acceleration infrastructure, and AI-aware memory services. Our systems serve mission-critical scenarios at massive scale — recommendation, search, ads, e-commerce, messaging, live streaming, and emerging AI-native applications — with strict requirements on availability, latency, throughput, global deployment, and cost efficiency.

Responsibilities

- Design and develop next-generation Redis Family core systems, including distributed KV caching, persistent memory storage, LLM KV cache infrastructure, and AI-aware memory services.
- Build planet-scale reliability, leading or contributing to HA architecture, failure isolation, multi-AZ/multi-region disaster recovery, and large-scale stability engineering for always-on business workloads.
- Architect and optimize multi-tier memory systems (in-memory / SSD / shared storage), reducing read/write amplification and improving tail latency under extreme concurrency.
- Build a production-grade ecosystem, including automated orchestration (provisioning, scaling, placement, scheduling) and monitoring systems (tracing, profiling, incident response runbooks).
- Implement and evolve capabilities such as bulk load, backup and restore, point-in-time recovery, tiered storage, and integration with upstream/downstream data systems to enrich the data ecosystem.
- Research emerging hardware and technologies, evaluating and landing production improvements based on ZNS SSDs, io_uring, RDMA/CXL, and "AI+DB" directions.