Phi-3 support, Llama 3 bug fixes, Mistral v0.3
Blog post from Unsloth
Unsloth has announced support for Phi-3 models and fixed several issues affecting Llama 3 finetuning, recommending its base or Instruct notebooks for best results. Performance has improved substantially: Phi-3 models now run 2x faster with 50% less memory, and Mistral v0.3 models 2.2x faster with 73% less VRAM.

The post also addresses common misconceptions and technical problems with Llama 3 finetuning, such as double BOS tokens and quantization issues, which Unsloth auto-fixes on its platform. Unsloth's Mistral-fied Phi-3 models show accuracy comparable to the originals, with optimizations enabling longer context lengths.

Unsloth has also been selected for GitHub's annual Accelerator Program, a significant milestone in its growth, and encourages community engagement through platforms like Discord and Ko-fi.
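The double-BOS problem arises when a chat template already prepends a BOS token to the text and the tokenizer then adds a second one on top, which can degrade finetuning quality. A minimal sketch of detecting and removing the duplicate (the helper name is hypothetical and this is not Unsloth's actual auto-fix; the BOS id shown is Llama 3's `<|begin_of_text|>`):

```python
def strip_double_bos(token_ids, bos_id):
    """Drop a duplicated BOS token at the start of a token id sequence.

    A double BOS typically appears when a chat template already includes
    the BOS token and the tokenizer adds another one during encoding.
    """
    if len(token_ids) >= 2 and token_ids[0] == bos_id and token_ids[1] == bos_id:
        return token_ids[1:]
    return token_ids

# Llama 3's BOS token "<|begin_of_text|>" has id 128000.
ids = [128000, 128000, 9906, 1917]  # double BOS followed by text tokens
print(strip_double_bos(ids, 128000))  # → [128000, 9906, 1917]
```

In practice, checking the first two ids returned by `tokenizer.apply_chat_template` or `tokenizer.encode` against `tokenizer.bos_token_id` is a quick way to spot whether a template and tokenizer are double-adding BOS.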