Vantage Launches LLM Token Allocation in Private Preview
Blog post from Vantage
Vantage has introduced LLM Token Allocation in private preview, a feature that lets organizations allocate AI model provider costs to metadata such as team, user, or application, improving visibility into and management of AI spend. Customers can connect token observability data or upload their own consumption data, which Vantage enriches with application-level metadata for use in Cost Reports and Virtual Tags.

As AI-powered tools become more prevalent, tokenized LLM usage introduces significant variability into engineering costs, affecting both Cost of Goods Sold (COGS) and R&D budgets. To address this, Vantage connects consumption data sources such as Amazon CloudWatch and Datadog and automatically allocates costs to attributes beyond those available in standard billing data. The enriched cost and usage data then flow into Vantage's reporting tools without requiring additional infrastructure.

The feature currently supports OpenAI and AWS Bedrock integrations, and Vantage plans to expand to more model providers, giving organizations a comprehensive way to track and optimize their AI-related expenditures.
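Vantage's enrichment pipeline itself is not public, but the core idea of allocating token spend to metadata can be sketched in a few lines. The snippet below is a minimal illustration, not Vantage's implementation: the prices, record fields (`team`, `model`, `input_tokens`, `output_tokens`), and `allocate_costs` helper are all hypothetical, and real provider pricing varies by model and over time.

```python
from collections import defaultdict

# Hypothetical per-1K-token prices in USD; real provider pricing differs.
PRICE_PER_1K = {
    "model-a": {"input": 0.0025, "output": 0.01},
    "model-b": {"input": 0.00025, "output": 0.00125},
}

def allocate_costs(usage_records):
    """Roll token spend up to an allocation key (here, team) from raw usage records."""
    totals = defaultdict(float)
    for rec in usage_records:
        price = PRICE_PER_1K[rec["model"]]
        cost = (rec["input_tokens"] / 1000) * price["input"] \
             + (rec["output_tokens"] / 1000) * price["output"]
        totals[rec["team"]] += cost
    return dict(totals)

records = [
    {"team": "search", "model": "model-a", "input_tokens": 120_000, "output_tokens": 40_000},
    {"team": "support", "model": "model-b", "input_tokens": 500_000, "output_tokens": 200_000},
    {"team": "search", "model": "model-b", "input_tokens": 50_000, "output_tokens": 10_000},
]
print(allocate_costs(records))  # per-team cost totals
```

The same grouping could key on user or application instead of team, which is the kind of attribute-level allocation the feature exposes on top of billing data.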