Content Deep Dive

Fine-tune & Run Qwen3

Blog post from Unsloth

Post Details
Company: Unsloth
Date Published:
Author: Daniel & Michael
Word Count: 430
Language: English
Hacker News Points: -
Summary

Qwen3 models, including Qwen3-30B-A3B, bring improved reasoning, instruction following, agent capabilities, and multilingual support, and can be fine-tuned through the Unsloth platform using the newly developed Unsloth Dynamic 2.0 quantization methodology. These advances let users run and fine-tune quantized Qwen3 large language models (LLMs) with minimal accuracy loss, and the models support a 128K context length through YaRN extension of their native context window. Unsloth makes fine-tuning 2x faster, reduces VRAM usage by 70%, and supports longer contexts than other environments via Flash Attention 2, enabling efficient fine-tuning and deployment even on limited hardware. All versions of Qwen3, including dynamic 4-bit quants and GGUFs, are available on Hugging Face, and the platform supports a wide range of transformer-style models and training algorithms, making Qwen3 flexible and accessible for diverse applications.
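
As a rough illustration of the fine-tuning workflow summarized above, the sketch below loads a 4-bit Qwen3 checkpoint with Unsloth's FastLanguageModel and attaches a LoRA adapter. The model repository name and the hyperparameter values shown are assumptions for illustration, not details taken from the post; adjust them to the checkpoint and hardware you actually use.

```python
# Minimal sketch of fine-tuning Qwen3 with Unsloth.
# The model id and hyperparameters below are assumed, not taken from the post.
from unsloth import FastLanguageModel

# Load a pre-quantized 4-bit Qwen3 checkpoint from Hugging Face.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-14B",   # assumed model id
    max_seq_length=2048,              # raise for long-context fine-tuning
    load_in_4bit=True,                # 4-bit quantization to cut VRAM use
)

# Attach a LoRA adapter so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

From here the model can be passed to a standard trainer (for example TRL's SFTTrainer) with a chat-formatted dataset; the quantized base weights stay frozen while only the LoRA adapter is updated, which is what keeps VRAM usage low.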