We spent 8 years making vector databases faster. Then we stopped.
Blog post from Zilliz
Zilliz spent eight years optimizing vector databases for faster, more predictable search. As AI workloads have evolved, the focus has shifted to balancing performance with cost. The newly introduced Zilliz Vector Lakebase targets the inefficiency of paying for always-on compute to serve data that is rarely queried: semantic data persists independently of any continuous serving cluster, and computation is attached on demand. This supports multiple compute lifecycles and a more dynamic cost model for workloads that are mostly idle.

Several techniques make the approach practical: quantization shrinks indexes to cut cold-start times, IVF clustering limits how much data each query must scan, and improved storage formats reduce I/O amplification. The result is a scalable, flexible, and economical system that matches the demands of modern AI applications.
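The production IVF index in Milvus is implemented in C++, but the core idea of "minimize data scanning" can be sketched in a few lines of NumPy. The sketch below is illustrative, not Zilliz's implementation; names like `ivf_search`, `nlist`, and `nprobe` follow common IVF terminology. Vectors are partitioned into clusters at build time, and each query scans only the few clusters whose centroids are nearest, rather than the whole corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: 10,000 vectors in 64 dimensions.
data = rng.standard_normal((10_000, 64)).astype(np.float32)

# --- Build phase: partition vectors into nlist clusters (basic k-means).
nlist = 32
centroids = data[rng.choice(len(data), nlist, replace=False)].copy()
for _ in range(10):  # a few Lloyd iterations are enough for a sketch
    assign = np.argmin(((data[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    for c in range(nlist):
        members = data[assign == c]
        if len(members):
            centroids[c] = members.mean(axis=0)

# Inverted lists: cluster id -> indices of vectors in that cluster.
inverted = {c: np.where(assign == c)[0] for c in range(nlist)}

# --- Search phase: probe only the nprobe closest clusters.
def ivf_search(query, nprobe=4, k=5):
    # Rank clusters by centroid distance, keep the nprobe nearest.
    order = np.argsort(((centroids - query) ** 2).sum(-1))[:nprobe]
    # Candidate set is only those clusters' members, not the full corpus.
    cand = np.concatenate([inverted[c] for c in order])
    dists = ((data[cand] - query) ** 2).sum(-1)
    return cand[np.argsort(dists)[:k]]

hits = ivf_search(rng.standard_normal(64).astype(np.float32))
```

With `nlist=32` and `nprobe=4`, each query touches roughly 1/8 of the corpus on average, which is the same lever a lakebase uses to keep on-demand compute cheap: less data scanned per query means less data that must be loaded before serving.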