Langfuse has developed the @observe() decorator to simplify tracing and evaluating complex LLM applications in Python, particularly those involving numerous LLM calls and non-LLM inputs. Initially built in response to challenges the team faced during Y Combinator with web-scraping and code-generation agents, the decorator aims to provide LLM-focused observability by abstracting away the complexity of creating and nesting traces.

The decorator integrates seamlessly with frameworks like LangChain, LlamaIndex, and the OpenAI SDK, capturing function calls, arguments, outputs, and exceptions while preserving the nesting hierarchy. It supports async environments but has limitations with Python's ThreadPoolExecutor and ProcessPoolExecutor, because the tracing context does not automatically propagate to worker threads and processes.

Inspired by tools like Sentry and Modal, the decorator reuses the low-level SDK for tracing without impacting application performance. It is part of a broader strategy to let teams experiment across frameworks while using Langfuse as a central platform for observability and evaluation. The decorator is currently available for Python, with plans to extend the functionality to JavaScript/TypeScript.