n8n's Redis vector store node: what you should know
Blog post from Redis
The Redis vector store node has been integrated into n8n, enabling users to incorporate vector search capabilities into their workflows by leveraging Redis for retrieval, semantic lookup, and caching. This integration lets existing Redis users expand their use of the system to include vector workloads without adding another database.

Redis is designed for fast, in-memory operations, which contributes to lower query latency and higher throughput compared to disk-based systems. Its architectural flexibility supports a variety of data structures, so a single Redis instance can handle multiple functions such as vector search, chat memory, session state, and caching. Redis also supports hybrid search, combining vector and metadata queries for more precise results.

Two reference workflows demonstrate these features in practice: one uses Retrieval-Augmented Generation over GitHub issues, and the other employs semantic caching to reduce LLM costs. The new Redis vector store node makes it easy for both existing and new users to implement and experiment with these capabilities in n8n.
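To make the semantic-caching idea concrete, here is a minimal in-process sketch of the core logic: cache each prompt's embedding alongside the LLM response, and on a new query return the cached response whose embedding is most similar, provided it clears a similarity threshold. This is a conceptual illustration only, not the n8n node's implementation; a real deployment would store the vectors in Redis and use its built-in KNN search, and `SemanticCache`, its threshold, and the toy 3-dimensional embeddings are all hypothetical names and values chosen for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Toy in-memory stand-in for a Redis-backed semantic cache.

    Stores (embedding, response) pairs; a lookup returns the cached
    response for the nearest embedding if similarity >= threshold,
    otherwise None (meaning: call the LLM, then put() the result).
    """

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, embedding):
        best_response, best_sim = None, 0.0
        for cached_embedding, response in self.entries:
            sim = cosine_similarity(embedding, cached_embedding)
            if sim > best_sim:
                best_response, best_sim = response, sim
        return best_response if best_sim >= self.threshold else None

    def put(self, embedding, response):
        self.entries.append((embedding, response))
```

A near-duplicate query (an embedding close to a cached one) hits the cache and skips the LLM call, which is where the cost savings come from; a semantically different query misses and falls through to the model. Redis performs the same nearest-neighbor lookup, but server-side and at scale.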