ElevenLabs has released Eleven Multilingual v2, a foundational deep learning model that supports AI voice generation across 30 languages, marking the platform's exit from beta. The release aims to dramatically improve content accessibility for media companies, game developers, publishers, and independent creators by removing language barriers. Developed entirely in-house over 18 months of research, the model generates emotionally rich, accurate speech while preserving each speaker's unique voice characteristics across languages. The release also includes professional voice cloning with enhanced security safeguards, letting users create digital copies of their voices that are nearly indistinguishable from the original. ElevenLabs' stated mission is to make all content universally accessible in every language and voice, enabling creators to produce culturally resonant content at lower cost and with fewer resources. The technology is already being adopted across creative sectors, including audiobooks, video games, and educational content, supported by partnerships with leading content creators and studios.
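
For developers, the model is exposed through ElevenLabs' public text-to-speech API under the `eleven_multilingual_v2` model ID. The sketch below shows one plausible way to generate speech in a non-English language with a cloned voice; the API key, voice ID, sample text, and voice settings are placeholders, and the endpoint shape reflects the documented API rather than anything stated in this announcement.

```python
# Minimal sketch (not official sample code): calling Eleven Multilingual v2
# through the ElevenLabs text-to-speech REST endpoint.
import requests

API_KEY = "your-elevenlabs-api-key"   # placeholder credential
VOICE_ID = "your-cloned-voice-id"     # placeholder: e.g. a professionally cloned voice

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
headers = {
    "xi-api-key": API_KEY,
    "Content-Type": "application/json",
    "Accept": "audio/mpeg",
}
payload = {
    # The model infers the language from the input text, so the same voice
    # can be used across the supported languages.
    "text": "Hola, bienvenidos a nuestro audiolibro.",
    "model_id": "eleven_multilingual_v2",
    "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},  # illustrative values
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()

# The response body is the synthesized audio stream.
with open("output.mp3", "wb") as f:
    f.write(response.content)
```

Because the language is inferred from the text itself, the same cloned voice can narrate, say, an audiobook chapter in Spanish and its sequel in German without any per-language configuration.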