LlamaIndex fans were treated to a roundup of exciting updates and resources. Microsoft's Phi-3 Mini model was announced with day-0 support through Ollama, and the create-llama application generator gained support for Llama 3 and Phi-3, letting users scaffold apps quickly. Feature releases included Jina AI's open-source rerankers and a Llama Pack implementation of the Language Agent Tree Search (LATS) technique. The community was also introduced to Memary, a reference implementation of long-term memory backed by knowledge graphs, along with guides on building context-augmented research assistants and corrective retrieval evaluation modules. Tutorials covered building top-tier RAG applications with technologies like Qdrant, Jina AI embeddings, and AWS, complemented by a comprehensive nine-part series on taking RAG from prototype to production. The community also engaged through webinars and podcasts, with co-founder Simon discussing security in LLM applications, and a new user group launched in Korea.