Context
Blog post from LlamaIndex
Query Pipelines, a new declarative API in LlamaIndex, enable streamlined orchestration of query workflows, from simple to advanced, across data use cases such as Retrieval-Augmented Generation (RAG) and structured data extraction. At the core is the QueryPipeline abstraction, which composes LlamaIndex modules such as LLMs, prompts, and query engines into computational graphs, with callback support and compatibility with observability partners. This approach reduces code complexity and improves readability, and it opens the door to future capabilities such as easy serializability and caching. The API supports both sequential chains and directed acyclic graphs (DAGs) for more complex scenarios, and it encourages modular design by letting developers select the best components for their use case. By providing a declarative interface for building LLM-powered pipelines, Query Pipelines aim to improve the developer experience and optimize performance in querying workflows.
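To make the sequential-chain idea concrete, here is a minimal, self-contained sketch of the pattern in plain Python. It is not the actual LlamaIndex API: `ToyQueryPipeline`, `prompt`, and `fake_llm` are hypothetical stand-ins that illustrate how a declarative chain passes each module's output to the next.

```python
# Toy sketch of a declarative sequential pipeline (NOT the real
# LlamaIndex QueryPipeline): each module is a callable, and the
# pipeline runs them in order, feeding each output into the next.
class ToyQueryPipeline:
    def __init__(self, chain):
        # The chain is declared up front, rather than wired imperatively.
        self.chain = chain

    def run(self, value):
        # Thread the value through every module in sequence.
        for module in self.chain:
            value = module(value)
        return value


# Hypothetical modules standing in for a prompt template and an LLM.
def prompt(topic):
    return f"Write a haiku about {topic}."

def fake_llm(prompt_str):
    return f"[LLM response to: {prompt_str}]"


pipeline = ToyQueryPipeline(chain=[prompt, fake_llm])
print(pipeline.run("autumn leaves"))
```

In the real library, the same declarative shape appears as a chain of actual modules (prompt templates, LLMs, retrievers), and DAG wiring extends this beyond a single linear sequence.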