MTEB v2, the latest iteration of the Massive Text Embedding Benchmark, introduces a host of new features aimed at improving the evaluation of embedding and retrieval systems. It supports a broader range of tasks, including multimodal models and non-embedding-based retrieval systems, and adds a consistent interface, stronger typing, and comprehensive documentation. A large-scale refactor addresses the code bloat that had accumulated in the previous version, making the library easier to maintain and extend.

Key features include the ResultCache for simpler caching and loading of results, support for CrossEncoders, and a unified approach to retrieval, reranking, and their instruction variants. A new SearchProtocol, together with the improved documentation, streamlines how search backends plug into the benchmark. MTEB v2 also improves support for error analysis and descriptive statistics, enabling better quality checks and letting users save model predictions for deeper analysis.

Upgrading from v1 to v2 involves replacing deprecated methods with their new equivalents; the release maintains backward compatibility where possible and adds support for Datasets v4.
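To make the SearchProtocol idea concrete without reproducing MTEB's actual interface, here is a minimal sketch in plain Python of how a structural protocol lets any retrieval backend, embedding-based or not, plug into an evaluator. All names here (SearchBackend, BagOfWordsSearcher) are hypothetical illustrations, not mteb API.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SearchBackend(Protocol):
    """Hypothetical stand-in for a search protocol: any object with a
    matching `search` method satisfies it structurally, no inheritance."""

    def search(
        self,
        queries: dict[str, str],
        corpus: dict[str, str],
        top_k: int,
    ) -> dict[str, dict[str, float]]:
        ...


class BagOfWordsSearcher:
    """Toy non-embedding retriever: scores documents by term overlap."""

    def search(self, queries, corpus, top_k):
        results = {}
        for qid, qtext in queries.items():
            qterms = set(qtext.lower().split())
            scores = {
                did: float(len(qterms & set(dtext.lower().split())))
                for did, dtext in corpus.items()
            }
            # Keep only the top_k highest-scoring documents per query.
            ranked = sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
            results[qid] = dict(ranked)
        return results


corpus = {"d1": "the cat sat on the mat", "d2": "dogs chase cats"}
queries = {"q1": "cat on mat"}
searcher = BagOfWordsSearcher()
assert isinstance(searcher, SearchBackend)  # structural check passes
print(searcher.search(queries, corpus, top_k=1))  # → {'q1': {'d1': 3.0}}
```

The design point is duck typing with a checkable contract: an evaluator written against such a protocol can score dense retrievers, CrossEncoder rerankers, or lexical baselines interchangeably, as long as each exposes the agreed `search` signature.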