The article explores Agentic RAG, a system that improves the reliability of Retrieval-Augmented Generation (RAG) by pairing it with a Trustworthy Language Model (TLM) that scores the trustworthiness of each response generated by a large language model (LLM). An agent orchestrates multiple retrieval strategies, escalating to more complex (and more costly) ones only when needed, so the system produces accurate answers without incurring unnecessary latency or expense. Because the TLM returns a quantitative trustworthiness score, the system can flag unreliable outputs and adapt its retrieval approach to supply better context, reducing AI hallucinations. Worked examples show the system handling both simple and complex queries, choosing the retrieval strategy that delivers a high-quality, trustworthy answer while managing computational resources effectively.
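The escalation loop described above can be sketched in a few lines of Python. Everything here is a hedged illustration, not the article's actual implementation: the retrieval functions, the `trustworthiness_score` stub, and the 0.8 threshold are all hypothetical stand-ins for the real retrievers, the TLM API, and whatever cutoff a deployment would choose.

```python
from typing import Callable, List, Tuple

# Hypothetical escalation ladder: cheap retrieval strategies first,
# costlier ones only if the answer is not yet trustworthy enough.
# All functions below are illustrative stubs, not the article's API.

def retrieve_simple(query: str) -> List[str]:
    # Cheapest strategy: a single vector lookup (stubbed).
    return ["short context for: " + query]

def retrieve_with_reranking(query: str) -> List[str]:
    # More expensive: wider search plus reranking (stubbed).
    return ["reranked context A", "reranked context B"]

def retrieve_with_decomposition(query: str) -> List[str]:
    # Most expensive: decompose the query into sub-questions (stubbed).
    return ["context for sub-question 1", "context for sub-question 2"]

def generate_answer(query: str, context: List[str]) -> str:
    # Stand-in for an LLM call that answers from the retrieved context.
    return f"answer({query}) using {len(context)} passage(s)"

def trustworthiness_score(query: str, context: List[str], answer: str) -> float:
    # Stand-in for a TLM score in [0, 1]; here it simply rewards richer context.
    return min(1.0, 0.4 + 0.25 * len(context))

def agentic_rag(query: str, threshold: float = 0.8) -> Tuple[str, float, str]:
    """Escalate through retrieval strategies until the answer is trustworthy."""
    strategies: List[Tuple[str, Callable[[str], List[str]]]] = [
        ("simple", retrieve_simple),
        ("rerank", retrieve_with_reranking),
        ("decompose", retrieve_with_decomposition),
    ]
    best: Tuple[str, float, str] = ("", 0.0, "none")
    for name, retrieve in strategies:
        context = retrieve(query)
        answer = generate_answer(query, context)
        score = trustworthiness_score(query, context, answer)
        if score > best[1]:
            best = (answer, score, name)
        if score >= threshold:  # trustworthy enough: stop escalating
            return answer, score, name
    return best  # fall back to the best attempt seen
```

Under these stub scores, a query stops at the reranking stage: the simple lookup scores 0.65 and is rejected, while two reranked passages score 0.9 and clear the threshold, so the costlier decomposition step is never run.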