MongoDB.local San Francisco 2026: Ship Production AI, Faster
Blog post from MongoDB
At MongoDB.local San Francisco, MongoDB announced new capabilities aimed at bridging the gap between AI prototypes and production, focusing on practical challenges such as maintaining conversational context and retrieving data efficiently. The company introduced the Voyage 4 model family, which features cross-model compatibility and includes a new open-weight model available on Hugging Face, enhancing AI search experiences.

MongoDB also unveiled the Embedding and Reranking API on MongoDB Atlas and Automated Embedding for MongoDB Community Edition, which together simplify semantic search and eliminate the need to manage separate embedding systems. In addition, Lexical Prefilters for Vector Search let developers apply text-based filters alongside vector operations, narrowing the candidate set before similarity scoring.

An intelligent assistant is now integrated into MongoDB Compass and Atlas, offering tailored, in-app guidance for developers. Finally, the mongot engine, which powers MongoDB Search and Vector Search, is now available under the SSPL, allowing developers to contribute to its development. Together, these updates underscore MongoDB's commitment to a robust, scalable data platform that supports rapid AI development and deployment without the overhead of managing database infrastructure.
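To make the prefiltering idea concrete, here is a minimal sketch of combining a metadata filter with a vector query using the existing `$vectorSearch` aggregation stage. The collection name (`articles`), index name (`vector_index`), and field names (`embedding`, `category`) are hypothetical, and the exact syntax of the newly announced Lexical Prefilters may differ from the `filter` clause shown here.

```python
# Sketch: restrict an approximate-nearest-neighbor search to documents
# matching a metadata filter, then score the survivors by similarity.
# All index/field names below are assumptions for illustration.

def build_vector_search_pipeline(query_vector, category):
    """Build an aggregation pipeline that prefilters on `category`
    before ranking documents by vector similarity."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",       # assumed vector index name
                "path": "embedding",           # field holding embeddings
                "queryVector": query_vector,   # query embedding
                "numCandidates": 100,          # ANN candidates to consider
                "limit": 10,                   # results to return
                "filter": {"category": category},  # metadata prefilter
            }
        },
        # Surface the similarity score alongside each result.
        {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
```

With pymongo, the pipeline would run as `db.articles.aggregate(build_vector_search_pipeline(vec, "ai"))`; the filter prunes non-matching documents before the vector index is consulted, which is what makes prefiltering cheaper than filtering after retrieval.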