voyage-3 & voyage-3-lite: A new generation of small yet mighty general-purpose embedding models
Blog post from Voyage AI
Voyage AI has introduced its latest embedding models, voyage-3 and voyage-3-lite, which improve retrieval quality, latency, and cost-efficiency over existing models such as OpenAI v3 large. voyage-3 outperforms OpenAI v3 large by an average of 7.55% across domains including code, law, finance, multilingual, and long-context retrieval, while costing 2.2x less and producing embeddings one-third the size, which in turn lowers vector database costs. voyage-3-lite offers 3.82% better retrieval accuracy than OpenAI v3 large at a fraction of the cost, and supports a 32K-token context length, four times that of OpenAI. Both models belong to the Voyage 3 series, the successor to the Voyage 2 series known for its domain-specific models, and aim to pair superior retrieval performance with affordability. They have been evaluated across 40 domain-specific datasets and 26 languages, demonstrating particularly strong multilingual performance while maintaining lower cost and latency than competitors.
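To make the vector-database savings from the threefold dimension reduction concrete, here is a minimal back-of-the-envelope sketch. It assumes float32 storage and the published output dimensions of the two models (1024 for voyage-3, 3072 for OpenAI v3 large); the 10M-document corpus size and the helper function are illustrative, not from the original post.

```python
def index_size_bytes(num_vectors: int, dims: int, bytes_per_float: int = 4) -> int:
    """Raw storage for a dense float-vector index (excludes metadata and index overhead)."""
    return num_vectors * dims * bytes_per_float

corpus = 10_000_000  # hypothetical 10M-document corpus

openai_gb = index_size_bytes(corpus, 3072) / 1e9  # OpenAI v3 large: 3072 dims
voyage_gb = index_size_bytes(corpus, 1024) / 1e9  # voyage-3: 1024 dims

print(f"OpenAI v3 large: {openai_gb:.1f} GB")  # 122.9 GB
print(f"voyage-3:        {voyage_gb:.1f} GB")  # 41.0 GB
print(f"reduction:       {openai_gb / voyage_gb:.1f}x")  # 3.0x
```

Real deployments add index structures and metadata on top of the raw vectors, but the 3x ratio between raw footprints carries through to storage-priced vector database bills.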