Fine-tune & Run Gemma 3
Blog post from Unsloth
Gemma 3, Google's state-of-the-art multimodal model, is now supported by Unsloth. The model offers a 128K context window and multilingual capabilities, and is available in sizes ranging from 270M to 27B parameters. Unsloth enables faster fine-tuning of all of these variants while using less VRAM.

Gemma 3 runs into numerical problems under float16 mixed precision, because its activations can exceed the range float16 can represent. Unsloth works around this with its own approach to handling activations and matrix multiplications, which makes training possible on float16-only GPUs such as the T4 and the RTX 20 series.

Beyond Gemma 3, Unsloth supports a wide range of transformer-style models and training algorithms, and its dynamic 4-bit quantization improves accuracy, particularly for vision models. Together these optimizations deliver significant VRAM savings and speed improvements, with future updates promising further capabilities, including multi-GPU support.
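To see why float16 mixed precision is fragile on GPUs without bfloat16 support, here is a minimal numpy sketch (the 70000.0 activation value is an illustrative assumption, not a number from the post) showing how a value that fits comfortably in float32 overflows to infinity when cast to float16:

```python
import numpy as np

# float16 tops out at 65504; anything larger overflows to inf.
fp16_max = np.finfo(np.float16).max
print(fp16_max)  # 65504.0

# A hypothetical large activation value, fine in float32...
activation = np.float32(70000.0)
print(activation)  # 70000.0

# ...but cast to float16 (as naive mixed precision would), it overflows.
overflowed = np.float16(activation)
print(np.isinf(overflowed))  # True
```

bfloat16 keeps float32's exponent range, so GPUs that support it (A100, RTX 30 series and later) avoid this overflow; on float16-only hardware like the T4 and RTX 20 series, the framework has to manage which operations stay in float32, which is the kind of handling the post describes.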