
Measuring Hallucinations in RAG Systems

Blog post from Vectara

Post Details
Company: Vectara
Date Published: -
Author: Shane Connelly
Word Count: 1,149
Language: English
Hacker News Points: -
Summary

Vectara has launched an open-source Hallucination Evaluation Model (HEM) to help enterprises assess and mitigate the risk of hallucinations in generative AI, particularly in Retrieval Augmented Generation (RAG) systems. Hallucinations, which can harm businesses, range from generating incorrect or biased information to reproducing copyrighted content. HEM evaluates how well large language models (LLMs) summarize data without hallucinating, helping companies choose the most reliable LLM for their needs. The model and its evaluation scores are published on Vectara's Hugging Face account under the Apache 2.0 license, so enterprises can inspect and customize it. Alongside the model, Vectara provides a "hallucination scorecard" comparing various LLMs on answer rate, accuracy, hallucination rate, and average summary length. The company plans to integrate these capabilities into its platform and intends to keep driving hallucination rates down while collaborating with the community to continually improve the model.
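As a rough illustration of how an evaluation model like this might be used, the sketch below loads a cross-encoder from Hugging Face and scores (source, summary) pairs for factual consistency. The model id vectara/hallucination_evaluation_model, the CrossEncoder loading path, and the score interpretation (closer to 1 means more consistent, closer to 0 means more likely hallucinated) are assumptions based on the typical sentence-transformers workflow, not details confirmed by the post; the model card on Hugging Face is the authoritative source for exact usage.

```python
# Minimal sketch: scoring summaries for factual consistency with a
# Hugging Face cross-encoder. Model id and score semantics are assumptions;
# consult the model card for the version you actually download.
from sentence_transformers import CrossEncoder

# Assumed model id on Vectara's Hugging Face account.
model = CrossEncoder("vectara/hallucination_evaluation_model")

pairs = [
    # (source passage, generated summary) -- illustrative data only
    ("A man walks into a bar and buys a drink.",
     "A bloody man brandishing a knife walks into a bar."),
    ("The capital of France is Paris.",
     "Paris is the capital of France."),
]

# Assumed: scores fall in [0, 1], higher meaning the summary is more
# consistent with its source, lower meaning likely hallucination.
scores = model.predict(pairs)
for (source, summary), score in zip(pairs, scores):
    print(f"consistency={score:.3f}  summary={summary!r}")
```

In practice, a score like this can be thresholded per application: pairs below the threshold are flagged as probable hallucinations, which is roughly how a per-LLM hallucination rate on a scorecard could be aggregated.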