Naive RAG pipelines pass the user's question directly to the embedding model for retrieval, which runs into limitations such as irrelevant content in the retrieved chunks and poorly worded questions. LangChain addresses these challenges through advanced retrieval methods such as multi-representation indexing, query transformation, and query construction. Query transformation rewrites or reshapes the user's question before it is passed to the embedding model, using approaches such as query rewriting, step-back prompting, follow-up question handling, and multi-query retrieval. These transformations improve retrieval by compensating for poorly worded questions and reducing the pull of irrelevant content, while query construction goes a step further and generates structured queries from natural-language questions. The choice of prompt is what distinguishes these methods: the prompts themselves are easy to write, but applying them effectively takes more thought.
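As a concrete illustration, here is a minimal sketch of the simplest transformation, query rewriting, using LangChain's expression language. The model name, the rewrite prompt, and the toy FAISS index are assumptions for the example, not details taken from the source.

```python
# Minimal sketch: rewrite a poorly worded question before retrieval.
# Assumes an OpenAI API key is configured; the documents and prompt are illustrative.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.vectorstores import FAISS

# Toy vector store standing in for a real document index (assumption).
vectorstore = FAISS.from_texts(
    [
        "Query transformation rewrites a question before embedding it for retrieval.",
        "Multi-query retrieval generates several variants of a question and merges the results.",
    ],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

# Rewrite prompt: the core of the technique is just this prompt.
rewrite_prompt = ChatPromptTemplate.from_template(
    "Rewrite the following question so it is clear and well suited for "
    "retrieving relevant documents. Return only the rewritten question.\n\n"
    "Question: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Chain: rewrite the question, then retrieve with the rewritten query.
rewrite_query = rewrite_prompt | llm | StrOutputParser()
better_question = rewrite_query.invoke(
    {"question": "uh how do i like make my rag thing find better stuff?"}
)
docs = retriever.invoke(better_question)
print(better_question)
print(docs)
```

The other transformations follow the same shape with a different prompt; for instance, multi-query retrieval swaps the rewrite prompt for one that generates several question variants (LangChain also ships a `MultiQueryRetriever` helper for this), and step-back prompting uses a prompt that asks a more general version of the question.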