OpenAI has released its third generation of text embedding models, `text-embedding-3-small` and `text-embedding-3-large`, which outperform the previous model, `text-embedding-ada-002`, on both the MTEB and MIRACL benchmarks. The headline feature is the ability to "shorten" embeddings: trailing dimensions can be dropped to produce a smaller vector that still works for querying and indexing. This is made possible by Matryoshka Representation Learning (MRL), a training technique that nests information at multiple levels of granularity within a single high-dimensional vector, concentrating the most important information in the earlier dimensions so that a truncated (and re-normalized) prefix remains a useful embedding on its own.

Shortened embeddings can speed up vector search through Adaptive Retrieval, a two-pass technique: a fast first pass over a low-dimensional representation gathers a list of candidate records, and a second pass re-ranks those candidates using the full-size embeddings. The optimal dimension size for the first pass was found to be 512, which delivers fast query speeds while maintaining high accuracy. A `sub_vector` function allows users to dynamically truncate stored embeddings to any size, and an index can be created on the `documents` table to speed up the first pass.

Shorter vectors are not unconditionally better, though: the accuracy lost by shrinking the first pass further must be recovered by loading more candidate records, and that extra loading hurts speed more than a modest increase in dimension size does, which is why 512 dimensions comes out ahead of smaller sizes.
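To make the shortening step concrete, here is a minimal Python sketch of the same operation `sub_vector` performs, reimplemented client-side with numpy. The function name `shorten_embedding` and the sizes are illustrative, not from the article: keep the first N dimensions, then re-normalize to unit length.

```python
import numpy as np

def shorten_embedding(embedding: list[float], dims: int) -> np.ndarray:
    """Keep the first `dims` values of an MRL-trained embedding and
    re-normalize to unit length so similarity scores stay comparable."""
    v = np.asarray(embedding, dtype=np.float32)[:dims]
    return v / np.linalg.norm(v)

# Stand-in for a full 3072-dim text-embedding-3-large response.
full = np.random.default_rng(0).standard_normal(3072).tolist()
short = shorten_embedding(full, 512)
assert short.shape == (512,)
```

The re-normalization matters because a raw truncated prefix has a norm below 1, so inner-product search over truncated vectors would no longer be equivalent to cosine similarity.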
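Shortening can also happen at generation time: the OpenAI embeddings endpoint accepts a `dimensions` parameter on the `text-embedding-3` models. A minimal request with the official Python SDK might look like this (the input text is just an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The `dimensions` parameter asks the API to return a shortened
# embedding directly instead of truncating the full vector locally.
response = client.embeddings.create(
    model="text-embedding-3-large",
    input="The quick brown fox jumps over the lazy dog",
    dimensions=512,
)
embedding = response.data[0].embedding
print(len(embedding))  # 512
```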
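Finally, a sketch of the Adaptive Retrieval flow itself. This brute-force numpy version stands in for the indexed Postgres queries described above, under the assumption that `docs` holds unit-normalized full-size embeddings; the names and sizes are illustrative:

```python
import numpy as np

def adaptive_retrieval(query, docs, first_pass_dims=512,
                       shortlist=200, top_k=10):
    """Two-pass search over unit-normalized embeddings: pass 1 scans
    cheap truncated prefixes to build a shortlist, pass 2 re-ranks
    that shortlist exactly using the full-size vectors."""
    # Pass 1: cosine similarity on re-normalized low-dimensional prefixes.
    q = query[:first_pass_dims] / np.linalg.norm(query[:first_pass_dims])
    d = docs[:, :first_pass_dims]
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    candidates = np.argsort(d @ q)[::-1][:shortlist]

    # Pass 2: exact scores over the shortlist with all dimensions.
    scores = docs[candidates] @ query
    return candidates[np.argsort(scores)[::-1][:top_k]]

# Toy corpus: 1,000 unit vectors standing in for 3072-dim embeddings.
rng = np.random.default_rng(0)
docs = rng.standard_normal((1_000, 3072)).astype(np.float32)
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
query = docs[42]                           # identical to doc 42 ...
print(adaptive_retrieval(query, docs)[0])  # ... so 42 should rank first
```

In a real deployment, the brute-force scan in pass 1 would be replaced by the approximate index built over the shortened vectors; the exact re-rank in pass 2 stays the same, which is what lets the first pass trade a little accuracy for speed without hurting final result quality.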