LlamaIndex's latest newsletter introduces updates to products such as LlamaCloud and LlamaExtract, which are designed to enhance structured data extraction from unstructured documents and to benefit retrieval-augmented generation (RAG) and agent pipelines through both UI and API interfaces. LlamaExtract has launched in beta, supporting Pydantic objects for structured extraction at the chunk or document level, with real-time JSON output visualization, asynchronous operation, and streaming. The newsletter also highlights a partnership with Ollama for tool calling, enabling the use of local models such as llama3.1, and day-0 support for building LLM applications with Mistral Large 2. It rounds out with guides and tutorials, including automated structured extraction for RAG and building multi-agent AI systems, alongside webinars on efficient document retrieval with vision language models.
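
The Pydantic-driven extraction workflow mentioned above pairs a schema definition with the LlamaExtract client. The sketch below is only illustrative: it assumes the beta `llama-extract` Python package, an API key in the environment, and a hypothetical `extract` call whose exact name and signature may differ in the current release; the `Invoice` schema and file path are made up for the example.

```python
from pydantic import BaseModel, Field

# Assumed beta client package (`pip install llama-extract`); reads
# LLAMA_CLOUD_API_KEY from the environment (assumption).
from llama_extract import LlamaExtract


class Invoice(BaseModel):
    """Pydantic schema describing the fields to pull from each document."""
    vendor: str = Field(description="Name of the vendor issuing the invoice")
    total: float = Field(description="Invoice total as written in the document")
    due_date: str = Field(description="Payment due date as written in the document")


extractor = LlamaExtract()

# Hypothetical call: run extraction against the Pydantic schema at the
# document level; the service returns JSON matching the Invoice model.
results = extractor.extract(Invoice, files=["invoices/acme_2024_07.pdf"])
print(results)
```

The same schema-first pattern applies whether extraction runs per chunk or per document; only the granularity setting on the extraction job changes.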
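The Ollama tool-calling partnership can be exercised through LlamaIndex's standard agent interface. A minimal sketch, assuming a local Ollama server with llama3.1 pulled and the `llama-index-llms-ollama` integration installed; the `multiply` tool is purely illustrative.

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama


def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b


# Wrap the plain Python function as a tool the agent can invoke.
multiply_tool = FunctionTool.from_defaults(fn=multiply)

# Assumes `ollama serve` is running locally and `ollama pull llama3.1` was done.
llm = Ollama(model="llama3.1", request_timeout=120.0)

# A ReAct agent drives tool use with the local model; a function-calling agent
# is an alternative if the model exposes native tool calling (assumption).
agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)
response = agent.chat("What is 12.3 times 4.56? Use the multiply tool.")
print(response)
```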