Snorkel AI uses Prefect to orchestrate its AI operations, replacing a homegrown system that had accumulated technical debt and struggled with scaling and visibility. Snorkel originally relied on Redis Queue for asynchronous processing, but that approach became insufficient as its machine learning workloads grew more complex.

Prefect's support for incremental adoption let Snorkel migrate workflows gradually, addressing network bottlenecks, resource isolation, and observability gaps without a wholesale architectural overhaul. The move delivered a 20x throughput improvement on LLM prompting jobs, reduced the need for custom infrastructure, and brought built-in capabilities such as task-level caching, rate limiting, and error handling.

By self-hosting Prefect on its existing Kubernetes infrastructure, Snorkel kept full control of orchestration while streamlining workloads ranging from financial document classification to real-time quality checks. Prefect's user-friendly interface and expressive handling of asynchronous work have let Snorkel scale efficiently while reducing its maintenance burden.
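To make the named capabilities concrete, here is a minimal, hypothetical sketch in plain Python of two of them: task-level caching (skip recomputation when inputs repeat) and automatic retries on transient errors. This is a conceptual illustration only, not Prefect's actual API; in Prefect these behaviors are configured declaratively on tasks (e.g. cache keys and retry counts) rather than hand-rolled. All names below (`cached_task`, `with_retries`) are invented for the sketch.

```python
import time
from functools import wraps


def cached_task(fn):
    """Memoize a task's result by its positional arguments,
    similar in spirit to task-level caching keyed on inputs.
    Hypothetical sketch, not Prefect's API."""
    cache = {}

    @wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)  # compute once per distinct input
        return cache[args]

    return wrapper


def with_retries(retries=3, delay=0.0):
    """Re-run a flaky task up to `retries` extra times before
    giving up, echoing declarative retry options on tasks."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _attempt in range(retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # retry on any task failure
                    last_exc = exc
                    time.sleep(delay)
            raise last_exc  # exhausted all attempts

        return wrapper

    return deco
```

In an orchestrator, the same ideas apply per task across distributed runs: cached results are shared between flow runs, and retry policy is tracked and surfaced in the UI rather than hidden inside application code.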