Ludwig v0.7 introduces a data-centric, declarative interface that simplifies fine-tuning computer vision and NLP models through YAML configurations. Headline features include pretrained TorchVision models, large-scale image augmentation on Ray AIR, and up to 50x faster fine-tuning of large language models via mixed precision training and embedding caching.

The set of pretrained TorchVision models has grown to more than 20, letting users start from ImageNet-trained weights and fine-tune them for improved performance on custom datasets. New image augmentation support expands the effective training dataset by applying randomized transformations to images during training.

Fine-tuning itself is faster thanks to automatic mixed precision training and cached encoder embeddings, which yield the largest speedups when the pretrained encoder is kept frozen (non-trainable). The release also adds new distributed training strategies built on PyTorch's Distributed Data Parallel (DDP) and Fully Sharded Data Parallel (FSDP), enabling training of models too large for a single GPU while maximizing GPU utilization. Compatibility with Ray 2.3 and the new Ray AI Runtime (AIR) rounds out the release, supporting scalable distributed preprocessing and training. The configuration sketches below illustrate how these features surface in Ludwig's YAML interface.
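As a first illustration of the declarative interface, here is a minimal sketch of fine-tuning a pretrained TorchVision encoder. The feature names (image_path, label) are hypothetical placeholders for dataset columns, and the encoder options (use_pretrained, trainable) follow the Ludwig documentation; exact names should be verified against the v0.7 docs.

```yaml
# Hedged sketch: fine-tune a pretrained TorchVision ResNet encoder.
# Feature names (image_path, label) are placeholders for your dataset columns.
input_features:
  - name: image_path
    type: image
    encoder:
      type: resnet          # one of the 20+ TorchVision encoder types
      use_pretrained: true  # start from ImageNet-trained weights
      trainable: true       # update encoder weights during fine-tuning
output_features:
  - name: label
    type: category
trainer:
  epochs: 5
```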
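Image augmentation is likewise declared on the input feature. The transformation names below (random_horizontal_flip, random_rotate) and the degree parameter are assumptions drawn from the Ludwig documentation; consult the docs for the full list of supported transformations.

```yaml
# Hedged sketch: randomized augmentation applied to image inputs during training.
input_features:
  - name: image_path
    type: image
    augmentation:
      - type: random_horizontal_flip  # flip images left/right at random
      - type: random_rotate           # rotate by a random angle
        degree: 15                    # assumed parameter name for max rotation
```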
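The fine-tuning speedups combine two settings: mixed precision in the trainer and embedding caching on a frozen encoder. A sketch, assuming the parameter names use_mixed_precision and cache_encoder_embeddings from the Ludwig documentation, with hypothetical feature names (review, sentiment):

```yaml
# Hedged sketch: faster fine-tuning with AMP and cached encoder embeddings.
input_features:
  - name: review
    type: text
    encoder:
      type: bert        # pretrained text encoder
      trainable: false  # frozen encoder, so its embeddings can be cached
    preprocessing:
      cache_encoder_embeddings: true  # compute embeddings once, reuse every epoch
output_features:
  - name: sentiment
    type: category
trainer:
  use_mixed_precision: true  # automatic mixed precision (fp16/fp32)
```

With the encoder frozen, its outputs are deterministic per input, so caching them during preprocessing avoids recomputing the forward pass through the encoder on every epoch; this is where the largest speedups come from.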
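Finally, the distributed strategy is selected through the backend section when running on a Ray cluster. A sketch, assuming the strategy values ddp and fsdp documented for Ludwig's Ray backend; the worker count is illustrative:

```yaml
# Hedged sketch: distributed fine-tuning on a Ray cluster.
backend:
  type: ray
  trainer:
    strategy: fsdp   # shard parameters across workers; "ddp" is also available
    use_gpu: true
    num_workers: 4   # hypothetical number of training workers
```

As a rule of thumb, DDP replicates the full model on each worker and synchronizes gradients, while FSDP shards parameters and optimizer state across workers, which is what makes models too large for a single GPU trainable.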