Grafana Labs has updated its LLM plugin to give users greater flexibility in choosing the large language models (LLMs) and providers that best suit their needs, reflecting the company's "big tent" philosophy. With the 0.10.0 release, the plugin supports open-weights models and non-OpenAI providers, so users can integrate self-hosted or alternative LLMs through OpenAI-compatible APIs.

The update extends Grafana's generative AI capabilities, which power features such as annotating dashboards, generating incident summaries, and explaining log patterns. Broader LLM support is intended to let users balance performance, privacy, and cost, with options to deploy either base or large models depending on their requirements. Grafana Labs benchmarked the plugin against a range of LLMs to verify compatibility and performance, finding that larger models generally produce better results.

Looking ahead, Grafana plans to deepen the plugin's LLM functionality, such as function calling and tool integration, to enrich interaction with Grafana across platforms and clients.
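The provider flexibility described above rests on the de facto OpenAI chat-completions wire format, which many self-hosted inference servers also speak. As a minimal sketch (not the plugin's actual internals), the request such an integration sends can be built like this; the base URL and model name are placeholders for a hypothetical self-hosted endpoint:

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request.

    Any server that speaks the OpenAI wire format accepts this shape,
    which is what lets one client target OpenAI or a self-hosted model
    by changing only the base URL.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You summarize Grafana incidents."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Placeholder endpoint and model name, standing in for a self-hosted server.
req = build_chat_request(
    "http://localhost:8000",
    "my-local-model",
    "Explain this log pattern: connection reset by peer",
)
```

Because only the URL changes between providers, the same request shape works whether the plugin is pointed at OpenAI itself or at an alternative backend.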