Ludwig v0.8, an open-source, low-code framework originally released by Uber, introduces a suite of features for customizing and fine-tuning large language models (LLMs) for both generative and predictive tasks. The new "llm" model type lets developers build text-based AI systems such as chatbots and code assistants, with compatibility for Hugging Face transformers. The release also integrates DeepSpeed to train large models across multiple GPUs, incorporates parameter-efficient fine-tuning techniques such as low-rank adaptation (LoRA), and supports quantized training on a single GPU.

Ludwig v0.8 further enhances prompt templating for context-specific model responses, supports zero-shot and in-context learning to minimize labeled-data requirements, and introduces retrieval-augmented in-context learning for improved prediction performance. The release aims to streamline building and deploying LLMs by resolving infrastructure challenges, allowing users to concentrate on model development. Additionally, it offers new integrations such as Daft for faster preprocessing, adds compatibility with PyTorch 2.0, and encourages community engagement and contributions toward future enhancements.
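
To make the configuration-driven workflow concrete, the sketch below assembles a declarative fine-tuning config that combines several of the features above: the "llm" model type, a LoRA adapter, quantized loading, and a prompt template. Field names follow Ludwig v0.8's declarative config conventions, but the base model, prompt wording, hyperparameters, and dataset path are illustrative assumptions rather than values from this release summary.

```python
# A minimal sketch of a Ludwig v0.8 LLM fine-tuning config as a Python dict.
# The base model, prompt text, and dataset filename below are assumptions
# chosen for illustration; swap in your own model and data.
config = {
    "model_type": "llm",                          # new LLM model type in v0.8
    "base_model": "meta-llama/Llama-2-7b-hf",     # assumed Hugging Face model
    "quantization": {"bits": 4},                  # quantized training on one GPU
    "adapter": {"type": "lora"},                  # parameter-efficient fine-tuning
    "prompt": {
        # Prompt template; {instruction} is filled from the input feature.
        "template": "### Instruction: {instruction}\n### Response:"
    },
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "output", "type": "text"}],
    "trainer": {
        "type": "finetune",                       # fine-tune rather than train from scratch
        "learning_rate": 1e-4,                    # assumed hyperparameters
        "batch_size": 1,
    },
}

# Training would then look like this (requires Ludwig to be installed):
# from ludwig.api import LudwigModel
# model = LudwigModel(config=config)
# results = model.train(dataset="train.jsonl")   # hypothetical dataset path
```

The same config can equally be written as YAML and passed to the `ludwig train` CLI; the dict form is shown here only to keep the sketch self-contained.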