Widespread adoption of AI in 2024 will depend on scalable evaluation methods for large language models (LLMs), the lack of which has so far been a major obstacle. To address this, deepset has developed an AI trust layer that provides visibility into LLMs and their output. Part of that layer is the Groundedness metric, which tracks the degree to which an answer is based on the underlying documents. By quantifying how well an LLM's output is supported by its sources and flagging likely hallucinations, Groundedness improves LLM security, trust, and observability. It also lets users track the score over time and use it to optimize their retrieval setup, reduce costs, and improve overall system performance.
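The post does not spell out how the metric is computed, but the core idea of a groundedness-style score can be sketched. The following minimal Python illustration (all names, such as `grounded_fraction` and `OVERLAP_THRESHOLD`, are hypothetical, not deepset's implementation) scores an answer as the fraction of its sentences that share substantial word overlap with at least one retrieved document; production systems would typically use an entailment or embedding model instead of lexical overlap:

```python
# Hypothetical sketch of a groundedness-style score: the share of answer
# sentences with strong word overlap with at least one source document.
# deepset's actual metric is not shown in the post; this is illustrative only.
import re

OVERLAP_THRESHOLD = 0.5  # assumed cutoff: a sentence counts as grounded if
                         # half of its content words appear in some document

def content_words(text: str) -> set[str]:
    """Lowercased alphabetic tokens, skipping very short function words."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def grounded_fraction(answer: str, documents: list[str]) -> float:
    """Fraction of answer sentences supported by at least one document."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    doc_vocabs = [content_words(d) for d in documents]
    grounded = 0
    for sentence in sentences:
        words = content_words(sentence)
        if not words:
            continue  # nothing to check against; leave as ungrounded
        best_overlap = max(
            (len(words & vocab) / len(words) for vocab in doc_vocabs),
            default=0.0,
        )
        if best_overlap >= OVERLAP_THRESHOLD:
            grounded += 1
    return grounded / len(sentences) if sentences else 0.0

if __name__ == "__main__":
    docs = ["The Eiffel Tower is 330 metres tall and stands in Paris."]
    answer = "The Eiffel Tower stands in Paris. It was painted gold in 2021."
    print(f"Groundedness: {grounded_fraction(answer, docs):.2f}")  # ~0.50
```

A score tracked this way over time behaves as the post describes: a sudden drop signals that answers are drifting away from the retrieved documents, which points to hallucination or a retrieval setup that needs tuning.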