
Secure by Default: Mandatory MFA in MongoDB Atlas

Blog post from MongoDB

Post Details
Company
Date Published
Author
Alex Bauer
Word Count
2,517
Language
English
Hacker News Points
-
Summary

Vector quantization compresses high-dimensional embeddings into compact representations while preserving their essential characteristics. By storing embeddings in reduced-precision formats (int8 or binary) instead of full-precision floats, organizations can dramatically cut memory requirements, accelerate similarity computations, and lower retrieval latency, making quantization an indispensable strategy for high-volume AI applications. Quantization-aware embedding models, such as those from Voyage AI, are trained specifically to retain accuracy under these reduced-precision representations while delivering cost savings at scale. With automatic scalar and binary quantization in index definitions, MongoDB Atlas supports "built-for-changing workloads" deployments, enabling large-scale vector workloads on smaller, more cost-effective clusters.
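To make the memory arithmetic concrete, here is a minimal sketch (not MongoDB's implementation, which runs automatically inside Atlas) of scalar and binary quantization applied to a single 1,024-dimensional embedding. The quantization scheme shown, min-max mapping onto int8 and sign-based bit packing, is a common illustrative choice, not necessarily the exact method Atlas uses:

```python
import numpy as np

def scalar_quantize(vec: np.ndarray) -> tuple[np.ndarray, float, float]:
    """Map float32 components onto the int8 range [-128, 127] via min-max scaling."""
    lo, hi = float(vec.min()), float(vec.max())
    scale = (hi - lo) / 255.0 or 1.0          # guard against a constant vector
    q = np.round((vec - lo) / scale - 128.0).astype(np.int8)
    return q, lo, scale                        # lo/scale are kept for dequantization

def binary_quantize(vec: np.ndarray) -> np.ndarray:
    """Keep 1 bit per dimension (the sign), packed 8 dimensions per byte."""
    return np.packbits(vec > 0)

rng = np.random.default_rng(0)
embedding = rng.standard_normal(1024).astype(np.float32)   # 4,096 bytes at float32

q8, lo, scale = scalar_quantize(embedding)    # 1,024 bytes -> 4x smaller
q1 = binary_quantize(embedding)               # 128 bytes   -> 32x smaller

print(embedding.nbytes, q8.nbytes, q1.nbytes)  # 4096 1024 128
```

These are the 4x (int8) and 32x (binary) footprint reductions that let the same vector workload fit on a smaller cluster. In Atlas itself, the equivalent behavior is requested declaratively by setting the `quantization` option (`"scalar"` or `"binary"`) on a vector field in the Vector Search index definition, rather than by quantizing vectors in application code.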