Carlos Aguilar, Hex's Head of Product, argues that checklist-based evaluations of AI analytics tools miss the point: success depends on how a tool manages context and how real users interact with it, not on feature comparisons. As conversational analytics begins to deliver genuine self-service for business users, he urges data teams to lead evaluations that weigh both the end-user experience and the data team's own experience. In practice, that means testing candidate tools with real users asking real questions, iterating on the context supplied to the tool, and monitoring responses for accuracy and relevance. The evaluation should examine how well a tool manages context, improves over time, and integrates into existing workflows. Aguilar's broader point is that only this kind of thorough evaluation can tell an organization whether a tool will work for it, and whether conversational analytics is ready to move from experiment to everyday operation.