Using Agent Skills to Automatically Improve your Prompts
Blog post from Langfuse
The post describes a workflow for improving prompts with an AI agent and a Langfuse Agent Skill: feedback attached to traces is analyzed by the agent, which then revises the prompt iteratively. Users annotate traces in Langfuse with scores or comments; the agent (Claude in the demonstration) fetches those scores, examines the underlying traces, flags errors and irrelevant responses, and groups them into categories so the failure patterns are easier to analyze. From there it identifies gaps in the current prompt and proposes concrete changes, moving quickly from a basic prompt to a more effective one.

The example application is a chatbot that searches GitHub discussions, which illustrates how traces are recorded and evaluated in Langfuse. The analyze-and-revise loop repeats until most of the observed issues are resolved.

The guide also outlines variations of the approach, such as driving the loop from end-user feedback, annotation queues, or experiment results instead of manual annotations. Finally, it recommends creating a dataset of representative cases so that prompt changes can be evaluated in a structured way and checked for regressions before being adopted.
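The categorization step can be sketched in a few lines. This is a toy illustration, not Langfuse's API: the trace records and the `categorize` heuristic below are hypothetical stand-ins for annotation scores fetched from Langfuse and for the agent's judgment.

```python
from collections import defaultdict

# Hypothetical trace records. In Langfuse, annotation scores are attached to
# traces; here we mock that shape with a 0/1 score per trace.
traces = [
    {"id": "t1", "input": "How do I rotate API keys?", "score": 0},
    {"id": "t2", "input": "Does Langfuse support OTel?", "score": 1},
    {"id": "t3", "input": "What does self-hosting cost?", "score": 0},
]

def categorize(trace):
    """Toy error taxonomy; a real pass would inspect the output text."""
    if trace["score"] == 1:
        return "ok"
    return "irrelevant_response"  # placeholder failure category

# Group trace IDs by failure category so patterns are easy to spot.
buckets = defaultdict(list)
for t in traces:
    buckets[categorize(t)].append(t["id"])

print(dict(buckets))
```

A real pass would replace the mocked list with traces and scores fetched from Langfuse and let the agent assign richer categories than this single placeholder.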
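The iterate-until-resolved loop can be sketched as follows. Both `revise` and `evaluate` are stubs standing in for the agent's rewrite step and for re-running the prompt against annotated traces; in the post, Claude performs both.

```python
def revise(prompt: str, open_issues: list[str]) -> str:
    # Stub for the agent's rewrite: append a note for the first open issue.
    return prompt + "\n- Handle: " + open_issues[0]

def evaluate(prompt: str, issues: list[str]) -> list[str]:
    # Stub evaluation: an issue counts as resolved once the prompt addresses it.
    return [i for i in issues if ("Handle: " + i) not in prompt]

prompt = "Answer questions using GitHub discussions."
issues = ["irrelevant results", "hallucinated links", "missing citations"]

# Revise the prompt until most issues are resolved (here: all, capped at 10).
iteration = 0
while evaluate(prompt, issues) and iteration < 10:
    prompt = revise(prompt, evaluate(prompt, issues))
    iteration += 1

print(iteration)  # one revision per remaining issue category
```

The cap on iterations mirrors the post's stopping condition: the loop ends when most issues are resolved, not when the prompt is theoretically perfect.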
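The regression check the guide suggests can be sketched as a dataset of representative inputs with expectations. The item shape loosely mirrors Langfuse dataset items (input plus an expected output), but `run_new_prompt` and the `expected_output_contains` field are hypothetical, chosen so the example is deterministic.

```python
# Representative cases collected from real traces (mocked here).
dataset = [
    {"input": "How do I self-host Langfuse?", "expected_output_contains": "docker"},
    {"input": "Where are traces stored?", "expected_output_contains": "database"},
]

def run_new_prompt(question: str) -> str:
    # Stub model call: returns canned answers so the check is reproducible.
    canned = {
        "How do I self-host Langfuse?": "Use docker compose as described in the docs.",
        "Where are traces stored?": "Traces live in the configured database.",
    }
    return canned[question]

# A revised prompt passes if no dataset item regresses.
regressions = [
    item["input"]
    for item in dataset
    if item["expected_output_contains"] not in run_new_prompt(item["input"])
]
print(regressions)
```

In practice the stub would be replaced by an actual model call with the candidate prompt, and the dataset would live in Langfuse so every prompt revision is scored against the same cases.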