Retrieval-augmented generation (RAG) enhances large language models (LLMs) by retrieving relevant documents and using them as context for responses. Even so, a RAG system can still produce inaccuracies or hallucinations, meaning statements unsupported by the retrieved material. These hallucinations fall into two broad categories: conflicts with the retrieved context, whether evident or subtle, and baseless information, whether fabricated outright or inferred beyond what the retrieved data supports.

Effective RAG therefore rests on two pillars: RAG Document Relevance, which asks whether the retrieved documents are pertinent to the query, and RAG Groundedness, which asks whether the LLM's response stays consistent with the retrieved context.

Evaluation frameworks such as the Ragas library provide tools to measure context recall and context precision (document relevance) and to assess response faithfulness and relevancy (groundedness); a usage sketch follows below. In platforms like n8n, RAG performance can be evaluated without external libraries, using native evaluation metrics for document relevance and answer groundedness. These metrics help refine workflows and tighten the alignment between LLM responses and the retrieved documents.
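To make the Ragas metrics concrete, here is a minimal sketch. It assumes a Ragas 0.1-style API and an OpenAI key in the environment (Ragas uses an LLM as the judge by default); metric imports and dataset column names have shifted between releases, so check your installed version.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    context_recall,      # does the retrieved context cover the reference answer?
    context_precision,   # are the relevant chunks ranked near the top?
    faithfulness,        # is every claim in the answer supported by the context?
    answer_relevancy,    # does the answer actually address the question?
)

# One evaluation record: question, generated answer, retrieved chunks,
# and a reference answer (needed by context_recall).
data = {
    "question": ["What is the capital of France?"],
    "answer": ["The capital of France is Paris."],
    "contexts": [["Paris is the capital and most populous city of France."]],
    "ground_truth": ["Paris is the capital of France."],
}

# Requires OPENAI_API_KEY by default; each score is a float in [0, 1].
result = evaluate(
    Dataset.from_dict(data),
    metrics=[context_recall, context_precision, faithfulness, answer_relevancy],
)
print(result)
```

The first two metrics grade the retriever (document relevance) while the last two grade the generator (groundedness and relevancy), which is why they are usually reported side by side.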
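n8n's native metrics are configured in the workflow editor rather than in code, but the groundedness check they perform is essentially an LLM-as-judge call. The following framework-agnostic sketch illustrates that idea; the prompt wording, the gpt-4o-mini model choice, and the is_grounded helper are illustrative assumptions, not n8n's actual implementation.

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

JUDGE_PROMPT = """You are grading a RAG answer for groundedness.

Context:
{context}

Answer:
{answer}

Reply with exactly one word: GROUNDED if every claim in the answer is
supported by the context, UNGROUNDED otherwise."""

def is_grounded(context: str, answer: str) -> bool:
    """LLM-as-judge: does the answer stay within the retrieved context?"""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(context=context, answer=answer),
        }],
    )
    return reply.choices[0].message.content.strip().upper().startswith("GROUNDED")

print(is_grounded(
    "Paris is the capital of France.",
    "The capital of France is Paris and it has 20 million residents.",
))  # Expect False: the population claim is not supported by the context
```

A binary verdict like this is easy to aggregate across a test set into a groundedness rate, which mirrors what workflow-level evaluation metrics report.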