All the Hard Stuff Nobody Talks About when Building Products with LLMs
Blog post from Honeycomb
Earlier this month, Honeycomb released Query Assistant, a natural language querying interface: users describe the query they want in plain English, and the system translates that description into a Honeycomb query. Building it meant working through the hard parts of integrating large language models (LLMs): fitting dataset context into a limited context window, getting prompt engineering right, and living with the latency inherent to commercial LLMs like GPT-3.5-turbo.

The team set out to build a user-friendly product, not a thin wrapper around an LLM interface. That meant engineering prompts to handle broad and vague inputs, mitigating prompt injection risk, and constraining the feature so that every action it takes is non-destructive and undoable. The project also demanded significant legal and compliance work, particularly for privacy-sensitive customers, and showed that an early access program cannot surface the full range of inputs a feature like this will face in production.

Ultimately, the effort underscored that LLMs are engines for features rather than standalone products, and that they require careful integration into an existing product framework.
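The translate-then-validate pattern described above can be sketched as follows. This is a minimal illustration, not Honeycomb's actual implementation: the query schema, column names, and allowlist are hypothetical, and the LLM call itself is omitted — the point is that the model's output is parsed and checked before it is ever executed, so a prompt injection can at worst produce a wrong (but read-only, undoable) query.

```python
import json

# Hypothetical allowlist of read-only operations -- an assumption for
# illustration, not Honeycomb's real query schema.
ALLOWED_CALCULATIONS = {"COUNT", "AVG", "P95", "MAX"}

def build_prompt(user_input: str, schema_columns: list[str]) -> str:
    """Construct the prompt sent to the LLM. Embedding the dataset's
    column names keeps the model grounded in fields that actually exist,
    while spending as little of the context window as possible."""
    return (
        "Translate the user's request into a JSON query with keys "
        "'calculation', 'column', and 'filters'. Respond with JSON only.\n"
        f"Available columns: {', '.join(schema_columns)}\n"
        f"User request: {user_input}"
    )

def validate_query(raw_llm_output: str, schema_columns: list[str]) -> dict:
    """Reject anything that is not a well-formed query against known
    columns. Because a validated query only ever *reads* data, every
    action the feature takes stays non-destructive and undoable."""
    query = json.loads(raw_llm_output)  # malformed output raises here
    if query.get("calculation") not in ALLOWED_CALCULATIONS:
        raise ValueError(f"unsupported calculation: {query.get('calculation')!r}")
    if query.get("column") not in schema_columns:
        raise ValueError(f"unknown column: {query.get('column')!r}")
    return query
```

A usage example under the same assumptions: if the model returns `{"calculation": "P95", "column": "duration_ms", "filters": []}` for the input "show me slow requests", `validate_query` accepts it, whereas an injected instruction that produces anything outside the allowlist is rejected before execution.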