OpenAI's recent developer day unveiled significant advancements in large language models (LLMs), including two new models: GPT-4-1106-preview, also known as GPT-4 Turbo, and GPT-4-vision-preview, which adds multimodal support for understanding both text and images. These models offer improved instruction following, a JSON mode, reproducible outputs via a seed parameter, and a 128,000-token context window.

The latest version of LlamaIndex supports these features, so developers can adopt the new models directly through its library. Alongside the model updates, new embeddings abstractions, including Azure embeddings, have been introduced, together with enhanced function-calling capabilities.

The SEC Insights demo highlights the power of retrieval-augmented generation for analyzing financial filings with the newest GPT-4 version, promising more insightful and relevant responses. Further updates from OpenAI are anticipated, continuing to advance the functionality and application of their models.
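As a rough illustration of how JSON mode and the new seed parameter surface in the Chat Completions API, the sketch below assembles a request payload for GPT-4 Turbo. The helper name and prompt are illustrative, and no network call is made; the parameter names (`response_format`, `seed`) follow the OpenAI API introduced at developer day.

```python
def build_chat_request(prompt: str, seed: int = 42) -> dict:
    """Assemble Chat Completions parameters for GPT-4 Turbo.

    Illustrative sketch: shows JSON mode and the seed parameter for
    (best-effort) reproducible outputs. Pass the returned dict to
    client.chat.completions.create(**params) with the openai v1 SDK.
    """
    return {
        "model": "gpt-4-1106-preview",  # GPT-4 Turbo
        "response_format": {"type": "json_object"},  # JSON mode
        "seed": seed,  # reproducible outputs (best effort)
        "messages": [
            {"role": "system", "content": "Reply in JSON."},
            {"role": "user", "content": prompt},
        ],
    }
```

Note that JSON mode requires the word "JSON" to appear somewhere in the messages, which is why the system prompt mentions it explicitly.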