Eliminating Hallucinations in LLM-Driven Virtual Agents
Blog post from Vonage
Shir Hilel, a Machine Learning Engineer at Vonage, describes improvements made in Vonage AI Studio to address hallucinations in virtual agents powered by large language models (LLMs). AI Studio, a low-code platform for building and managing virtual agents, faced a recurring problem: LLM hallucinations, that is, outputs not grounded in the system's configuration or the user's input.

To improve accuracy and reliability, Vonage introduced structured reasoning fields and schema order refinements, cutting the error rate from 23.7% to 1.0%. The changes include guiding the LLM with auxiliary reasoning fields, reordering JSON fields to shape the model's reasoning path, and applying regex validation to ensure parameters match the expected format. Together, these changes made the LLM's outputs more consistent and predictable, aligning them with the system's expected behavior and capabilities while preserving the model's ability to correctly identify valid inputs.
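The techniques above can be sketched in code. The snippet below is a minimal illustration, not Vonage's actual implementation: the schema, field names, action names, and regex patterns are all hypothetical. It shows (1) a response schema where an auxiliary `reasoning` field is placed before the `action` field, so an autoregressive model articulates its justification before committing to an answer, and (2) regex validation that rejects parameters whose format does not match expectations.

```python
import json
import re

# Hypothetical response schema. The auxiliary "reasoning" field comes FIRST:
# because the model generates tokens left to right, the field order in the
# schema shapes the order of its reasoning.
RESPONSE_SCHEMA = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},   # auxiliary field: model explains its choice
        "action": {"type": "string"},      # the node/intent to route to (hypothetical)
        "parameters": {"type": "object"},  # extracted slot values
    },
    "required": ["reasoning", "action", "parameters"],
}

# Hypothetical per-parameter regex patterns used to validate format accuracy.
PARAM_PATTERNS = {
    "phone_number": re.compile(r"\+?[0-9]{7,15}"),
    "order_id": re.compile(r"[A-Z]{2}-\d{6}"),
}

def validate_response(raw: str, allowed_actions: set) -> dict:
    """Parse an LLM reply and reject outputs not grounded in the configuration."""
    data = json.loads(raw)
    if data.get("action") not in allowed_actions:
        raise ValueError(f"hallucinated action: {data.get('action')!r}")
    for name, value in data.get("parameters", {}).items():
        pattern = PARAM_PATTERNS.get(name)
        if pattern and not pattern.fullmatch(str(value)):
            raise ValueError(f"parameter {name!r} failed format check: {value!r}")
    return data

# Example: a well-formed reply passes; an action outside the configured set
# or a malformed parameter would raise a ValueError instead.
reply = (
    '{"reasoning": "User provided a callback number.",'
    ' "action": "collect_phone",'
    ' "parameters": {"phone_number": "+14155550123"}}'
)
result = validate_response(reply, allowed_actions={"collect_phone", "end_call"})
```

In this sketch, validation acts as a safety net after generation: even if the reasoning field and schema ordering steer the model toward grounded outputs, any remaining hallucinated action or malformed parameter is caught before it reaches the rest of the system.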