
LLM Hallucination Index: RAG Special

Blog post from Galileo

Post Details

Company: Galileo
Date Published:
Author: Osman Javed
Word Count: 302
Language: English
Hacker News Points: -
Summary

Galileo's Hallucination Index: RAG Special is a benchmarking study that evaluates the performance of leading foundation models in real-world Retrieval-Augmented Generation (RAG) use cases. The study tested 22 models from providers including OpenAI, Anthropic, and Meta, measuring how context length affects model performance. It also weighed in on the open-source vs. closed-source debate, with results suggesting that closed-source models do not necessarily outperform open-source ones. The study used Galileo's Context Adherence Evaluation Model to measure how closely each LLM adheres to the provided context, helping identify instances of hallucination.
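
To make the methodology concrete, here is a minimal sketch of what a context-adherence check looks like in principle: score each sentence of a RAG response by how well it is supported by the retrieved context, and flag low-scoring sentences as possible hallucinations. This is not Galileo's Context Adherence Evaluation Model, which is a trained evaluator; the token-overlap scorer, function names, and threshold below are hypothetical stand-ins for illustration only.

```python
# Toy context-adherence check for a RAG response (illustrative only).
# Real evaluators use trained models; this stand-in uses simple token
# overlap to show the general shape of the technique.

import re


def sentence_support(sentence: str, context: str) -> float:
    """Fraction of a sentence's content words that appear in the context."""
    words = {w for w in re.findall(r"[a-z']+", sentence.lower()) if len(w) > 3}
    if not words:
        return 1.0  # nothing substantive to verify
    context_words = set(re.findall(r"[a-z']+", context.lower()))
    return len(words & context_words) / len(words)


def adherence_report(response: str, context: str, threshold: float = 0.5):
    """Score each response sentence; low scores suggest possible hallucination."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", response) if s.strip()]
    report = []
    for s in sentences:
        score = sentence_support(s, context)
        report.append((s, score, score < threshold))
    return report


if __name__ == "__main__":
    # Hypothetical example data, not taken from the study itself.
    context = "The RAG Special tested 22 models across short, medium, and long contexts."
    response = ("The study tested many models. "
                "It also proved closed-source models always win.")
    for sentence, score, flagged in adherence_report(response, context):
        print(f"{score:.2f} {'FLAG' if flagged else 'ok  '} {sentence}")
```

Run on the hypothetical example, the second sentence scores low because almost none of its content words are grounded in the retrieved context, so it gets flagged; a production evaluator replaces the overlap heuristic with a model trained to judge entailment between response and context.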