Behind New Relic AI’s ability to speak New Relic Query Language
Blog post from New Relic
As modern systems grow more complex and distributed, understanding an organization's architecture and using tools like New Relic effectively involve a learning curve, particularly with its proprietary query language, NRQL. New Relic AI addresses this by using large language models, such as OpenAI's GPT-4 Turbo, to translate natural-language questions into NRQL, helping both technical and non-technical stakeholders analyze telemetry data. The assistant relies on prompt engineering and few-shot prompting to improve query accuracy, and a feedback loop helps it correct syntactical errors.

Challenges remain, including syntax hallucinations and ambiguous questions, which call for context-driven interpretation and ongoing performance optimization. The assistant's ability to handle custom events and attributes in the flexible NRDB environment is balanced against cost and complexity considerations to keep resource use efficient. Continuous improvement efforts focus on expanding the assistant's capabilities, including more integrations that surface insights and support decision-making across the organization.
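To make the two core techniques concrete, here is a minimal, hypothetical Python sketch of few-shot prompting combined with a syntax-error feedback loop for natural-language-to-NRQL translation. The function names, example queries, the `gpt-4-turbo` model choice, and the stand-in validator are assumptions for illustration only, not New Relic's actual implementation.

```python
"""Illustrative sketch: translate a natural-language question into NRQL
using few-shot prompting, and feed validation errors back to the model."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A few curated question/NRQL pairs steer the model toward valid syntax.
FEW_SHOT_EXAMPLES = [
    ("How many transaction errors occurred in the last hour?",
     "SELECT count(*) FROM TransactionError SINCE 1 hour ago"),
    ("What is the average page load time by browser over the past day?",
     "SELECT average(duration) FROM PageView FACET userAgentName SINCE 1 day ago"),
]

SYSTEM_PROMPT = (
    "You translate natural-language questions about telemetry data into NRQL. "
    "Return only the NRQL query, with no explanation."
)


def build_messages(question: str, failed_nrql: str | None = None,
                   error: str | None = None) -> list[dict]:
    """Assemble the chat prompt: few-shot examples, the user's question, and
    (on retries) the failed query plus its error so the model can repair it."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for nl, nrql in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": nl})
        messages.append({"role": "assistant", "content": nrql})
    messages.append({"role": "user", "content": question})
    if failed_nrql and error:
        messages.append({"role": "assistant", "content": failed_nrql})
        messages.append({
            "role": "user",
            "content": f"That query failed validation with: {error}. "
                       "Return a corrected NRQL query.",
        })
    return messages


def validate_nrql(nrql: str) -> str | None:
    """Stand-in validator: a real implementation would run the query against
    NRDB (for example, via New Relic's NerdGraph API) and surface its error."""
    upper = nrql.upper()
    if not upper.startswith("SELECT") or " FROM " not in upper:
        return "NRQL queries must start with SELECT and include a FROM clause"
    return None


def translate_to_nrql(question: str, max_attempts: int = 3) -> str:
    """Generate NRQL, looping syntax errors back to the model until it passes."""
    nrql, error = None, None
    for _ in range(max_attempts):
        response = client.chat.completions.create(
            model="gpt-4-turbo",        # assumed model choice
            messages=build_messages(question, nrql, error),
            temperature=0,              # keep query generation deterministic
        )
        nrql = (response.choices[0].message.content or "").strip()
        error = validate_nrql(nrql)
        if error is None:
            return nrql
    raise RuntimeError(f"No valid NRQL after {max_attempts} attempts")


if __name__ == "__main__":
    print(translate_to_nrql("Which services had the most errors this week?"))
```

The retry loop mirrors the feedback mechanism described above: rather than failing on a hallucinated clause, the assistant shows the model its own invalid query alongside the validation error and asks for a correction.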