This post discusses the challenges Greptile, an AI code review bot, faced in reducing the number of comments it generated on pull requests. The initial approach used prompting techniques to improve comment quality, but this failed because large language models (LLMs) are poor at evaluating their own output. A second attempt, using LLMs as judges of comment severity, was also unsuccessful. What finally worked was a clustering-and-filtering approach: suppressing new comments that are similar to comments developers had previously downvoted, while keeping those similar to upvoted ones. This technique raised the address rate of Greptile's comments from 19% to over 55%, significantly cutting the noise the bot generates.
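The successful filtering step can be sketched as an embedding-similarity check. The sketch below is an illustration, not Greptile's actual implementation: the embedding model, the cosine-similarity threshold of 0.85, and the function names (`cosine_similarity`, `should_post`) are all assumptions made for the example. The idea is simply to embed each candidate comment and drop it if it sits too close to any past downvoted comment.

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def should_post(comment_vec: list[float],
                downvoted_vecs: list[list[float]],
                threshold: float = 0.85) -> bool:
    """Suppress a candidate comment whose embedding is too similar
    to any comment a developer previously downvoted.
    (threshold is a hypothetical tuning parameter.)"""
    return all(cosine_similarity(comment_vec, d) < threshold
               for d in downvoted_vecs)

# Toy vectors standing in for real comment embeddings:
downvoted = [[1.0, 0.0]]
print(should_post([1.0, 0.05], downvoted))  # near-duplicate of a downvote: filtered
print(should_post([0.0, 1.0], downvoted))   # dissimilar: allowed through
```

In practice the embeddings would come from a text-embedding model, and the same similarity test could boost rather than suppress comments close to previously upvoted ones.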