LangChain offers advanced retrieval methods that improve retrieval-augmented generation (RAG) by addressing common failure modes: irrelevant content in retrieved chunks, poorly worded user queries, and the need to produce structured queries. These methods use large language models (LLMs) to transform queries, rewriting the user's question or generating search terms that retrieve more relevant documents. The main strategies are multi-representation indexing, query transformation to rewrite or expand the user's original question, query construction to translate natural language into a specific query syntax, and multi-query retrieval, which generates multiple sub-queries for complex questions. Because these transformations are performed by LLMs, their quality depends heavily on the prompts that guide them, making prompt engineering a key lever for optimizing retrieval results.
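As a concrete illustration of the multi-query strategy, the sketch below uses LangChain's MultiQueryRetriever to generate several rephrasings of a question and pool the results. The specific packages (langchain-openai, langchain-chroma), the toy documents, and the query are assumptions chosen to keep the example self-contained; module paths can differ between LangChain versions.

```python
# Minimal sketch of multi-query retrieval with LangChain.
# Assumes langchain, langchain-openai, and langchain-chroma are installed
# and OPENAI_API_KEY is set; swap in your own LLM and vector store as needed.
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain_core.documents import Document

# Index a few toy documents so the example runs end to end.
docs = [
    Document(page_content="Multi-representation indexing stores alternative views of a document."),
    Document(page_content="Query transformation rewrites poorly worded user questions."),
    Document(page_content="Multi-query retrieval issues several sub-queries for complex questions."),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

llm = ChatOpenAI(temperature=0)

# The retriever prompts the LLM for several rephrasings of the question,
# runs each against the vector store, and returns the unique union of hits.
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=llm,
)

results = retriever.invoke("How does LangChain handle poorly worded queries?")
for doc in results:
    print(doc.page_content)
```

The prompt that generates the alternative queries can typically be overridden when constructing the retriever, which is where the prompt-engineering angle mentioned above comes into play.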