Creating an effective Retrieval-Augmented Generation (RAG) pipeline that responds well in multiple languages is more complicated than it first appears: it requires a sound chunking strategy, a state-of-the-art embedding model, and a careful implementation. The choice of embedding model significantly affects RAG quality, and Vectara's new Boomerang model outperforms OpenAI and Cohere embedding models in some cases, especially in non-English languages such as Hebrew and Turkish, where it retrieves relevant passages more effectively than its competitors. With Boomerang integrated into Vectara's "RAG as a service" architecture, users can build effective GenAI applications with improved retrieval quality across multiple languages.
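To make the chunk-embed-retrieve flow concrete, here is a minimal, self-contained sketch of the retrieval step. The bag-of-words "embedding" below is a deliberately toy stand-in for a real model such as Boomerang, and the fixed-size word chunking is intentionally naive; the names `chunk`, `embed`, and `retrieve` are illustrative, not part of any Vectara API.

```python
import math
import re
from collections import Counter

def chunk(text, max_words=30):
    # Naive fixed-size chunking; production pipelines use smarter strategies
    # (sentence boundaries, overlap, semantic splitting).
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank chunks by similarity to the query and return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ("Boomerang is an embedding model. It supports many languages. "
        "Chunking splits documents into passages.")
passages = chunk(docs, max_words=6)
print(retrieve("embedding model languages", passages, k=1))
```

In a real multilingual pipeline, swapping the toy `embed` for a strong multilingual model is exactly where a model like Boomerang matters: the chunking and ranking logic stay the same, but retrieval quality in languages like Hebrew and Turkish depends on the embedding.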