So We Shipped an AI Product. Did it Work?
Blog post from Honeycomb
Honeycomb's Query Assistant integrates Large Language Models (LLMs) to translate natural-language questions into queries on their platform. Launched as an experimental feature, it quickly evolved into a core product after iterative improvements informed by production data.

The results were mixed. The feature boosted activation metrics and stayed cost-effective, since running on GPT-3.5-turbo keeps per-request costs low under OpenAI's pricing, but it fell short of expectations in engaging free-tier users. Where it clearly helped was the learning curve: by converting natural-language inputs into complex queries, it gave new users working examples that encouraged more manual and sophisticated querying over time.

Customers and sales teams alike have praised the tool for its role in onboarding and engaging prospects. Early limitations, such as poor discoverability and a narrow feature set, were addressed through user feedback, making Query Assistant a valuable addition to the Honeycomb platform.
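To make the natural-language-to-query flow concrete, here is a minimal sketch of how such a feature might be structured: build a prompt asking the model for a structured query spec, then validate the model's JSON reply before running it. This is not Honeycomb's actual implementation; the schema, function names, and prompt wording are all illustrative assumptions, and the model call itself is omitted.

```python
import json

# Hypothetical query schema, loosely inspired by structured query specs.
# The real Query Assistant's schema and prompts are not public in this post.
SCHEMA_HINT = (
    "Respond with JSON only, e.g. "
    '{"calculations": [{"op": "COUNT"}], '
    '"filters": [{"column": "status_code", "op": ">=", "value": 500}], '
    '"breakdowns": ["service.name"]}'
)

def build_prompt(user_input: str) -> str:
    """Assemble the prompt that would be sent to a model such as gpt-3.5-turbo."""
    return (
        "Translate the user's question into a query specification.\n"
        f"{SCHEMA_HINT}\n"
        f"Question: {user_input}"
    )

def parse_query_response(raw: str) -> dict:
    """Validate the model's reply into a query dict, rejecting malformed output."""
    spec = json.loads(raw)  # raises ValueError on non-JSON replies
    if not isinstance(spec, dict) or "calculations" not in spec:
        raise ValueError("model reply is not a valid query spec")
    return spec

# Simulated round trip: a well-formed model reply (actual API call omitted).
reply = (
    '{"calculations": [{"op": "COUNT"}], '
    '"filters": [{"column": "status_code", "op": ">=", "value": 500}], '
    '"breakdowns": ["service.name"]}'
)
spec = parse_query_response(reply)
print(spec["breakdowns"])
```

Validating the reply before execution matters in practice: model output is untrusted input, and the blog post's theme of iterating on production data implies exactly this kind of defensive parsing around the LLM call.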