Instrument your LLM calls to analyze AI costs and usage
Blog post from Tinybird
Over the past few years, Tinybird has steadily added AI features to its platform, and with the latest release it has become an AI-native product. These features, powered by large language models (LLMs), automate tasks like project creation, test writing, mock data generation, and API iteration.

Under the hood, Tinybird runs a dual-backend setup: Python for the more complex AI functionality and TypeScript for user experience features. Models are served through Vertex AI, with libraries like LiteLLM (Python) and the Vercel AI SDK (TypeScript) handling the calls to LLM providers.

Instrumenting those AI calls is critical: it is how the team monitors performance, cost, and user engagement, and how it tunes model choice and spend across providers. Every LLM call is tracked as an event sent to Tinybird's own Events API, where the metrics can be analyzed and acted on in real time.

To visualize these metrics, the team built an internal web app, which it plans to release as an open-source tool. This level of instrumentation is what lets Tinybird keep LLM cost and performance under control as models and providers keep changing.
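As a rough sketch of the instrumentation pattern on the Python side: LiteLLM lets you register a success callback that fires after every completed call, which is a natural place to capture model, token counts, latency, and estimated cost. The event fields and the model string below are illustrative assumptions, not Tinybird's actual schema:

```python
import litellm


def track_llm_event(kwargs, response, start_time, end_time):
    """Collect per-call metrics after each successful LLM completion."""
    event = {
        "timestamp": start_time.isoformat(),
        "model": response.model,
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
        # LiteLLM estimates cost in USD from its provider pricing tables
        "cost_usd": litellm.completion_cost(completion_response=response),
        "duration_ms": (end_time - start_time).total_seconds() * 1000,
    }
    send_to_tinybird(event)  # hypothetical helper, sketched below


# Register the callback; LiteLLM invokes it after every successful call
litellm.success_callback = [track_llm_event]

response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",  # illustrative Vertex AI model string
    messages=[{"role": "user", "content": "Generate mock data for a users table"}],
)
```

Doing this in a callback rather than inline keeps the instrumentation out of the application code paths, so every call site is tracked the same way.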
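Getting those events into Tinybird is then a single HTTP call: the Events API accepts NDJSON rows appended to a data source. A minimal sketch, assuming a data source named llm_events and a token with append scope in the TB_TOKEN environment variable (both names are assumptions for illustration):

```python
import json
import os

import requests

TINYBIRD_EVENTS_API = "https://api.tinybird.co/v0/events"
TB_TOKEN = os.environ["TB_TOKEN"]  # token with append scope


def send_to_tinybird(event: dict, datasource: str = "llm_events") -> None:
    """Append one event (as an NDJSON row) to a Tinybird data source."""
    response = requests.post(
        TINYBIRD_EVENTS_API,
        params={"name": datasource},
        headers={"Authorization": f"Bearer {TB_TOKEN}"},
        data=json.dumps(event),
    )
    response.raise_for_status()
```

Once the events land in the data source, they can be queried and exposed through Tinybird's own pipes and APIs, which is what the internal visualization app described above builds on.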