Relevance Feedback in Qdrant
Blog post from Qdrant
The Relevance Feedback Query, released in Qdrant 1.17.0, is a scalable method for improving search result relevance in vector search engines, addressing a gap in the industry: there is no universal interface for relevance feedback. Unlike earlier methods that were limited to reranking an already-retrieved candidate list, this approach uses full access to the vector search index to traverse the vector space in the direction of relevance, guided by feedback signals from users or models.

The method collects feedback on a small sample of documents and uses it to adjust the scoring formula, aiming to balance speed, cost, and quality in neural search. In experiments, feedback-based scoring improved the recall of relevant documents compared to traditional retrievers, especially when the feedback model captured distinctions that the retriever's simpler model missed.

The tool is designed to be cheap, adaptable, and universal: it works across various data types and lets the relevance feedback influence the entire vector space rather than only a reranked subset. The implementation is supported by a Python package that customizes scoring formulas for a user's dataset and feedback model, with results evaluated using metrics such as Discounted Cumulative Gain (DCG).
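The core idea of "traversing in the direction of relevance" can be illustrated with a classic Rocchio-style update, which nudges a query vector toward positively-rated results and away from negatively-rated ones. This is a conceptual sketch only; the function name, weights, and formula here are illustrative assumptions, not Qdrant's actual scoring formula or API.

```python
def adjust_query(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style sketch: shift a query vector toward liked results
    and away from disliked ones. All parameters are illustrative."""
    dim = len(query)

    def centroid(vectors):
        # Mean of the feedback vectors; zero vector if no feedback given.
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    pos_c = centroid(positives)
    neg_c = centroid(negatives)
    # Weighted combination: keep the original intent (alpha), move toward
    # positive feedback (beta), move away from negative feedback (gamma).
    return [alpha * query[i] + beta * pos_c[i] - gamma * neg_c[i]
            for i in range(dim)]


# One positive example pulls the query toward that document's direction.
adjusted = adjust_query([1.0, 0.0], positives=[[0.0, 1.0]], negatives=[])
# → [1.0, 0.75]
```

The key difference from this classic formulation, per the post, is that Qdrant applies feedback at the scoring/index level, so the adjustment affects the whole vector space rather than a fixed candidate list.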