Company
Date Published
Author
Abhay Malik
Word count
616
Language
English
Hacker News points
None

Summary

Predibase has announced a major release centered on a new fine-tuning stack that boosts training speeds by up to 10x, alongside several platform enhancements. The release adds Llama-3 models for both inference and fine-tuning, and introduces Adapters as the primary mechanism for fine-tuning models. The new fine-tuning system is independent of Ludwig, though Predibase intends to open-source it in the future, and it uses techniques such as optimized CUDA kernels and flash attention to maximize throughput. A new Python SDK has also been launched to improve usability and consistency; the previous SDK will be fully deprecated by May 2024. The release aims to democratize access to high-quality large language models (LLMs) by giving organizations a fast, efficient, and user-friendly fine-tuning experience, supported by a free trial with $25 in credits.
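The adapter approach mentioned above typically means LoRA-style fine-tuning: the base model's weights stay frozen, and a small low-rank update is trained on top of them. The Predibase SDK itself is not shown in this summary, so the following is only a minimal pure-Python sketch of the low-rank adapter idea; the class and function names are illustrative, not part of any real API.

```python
def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

class LoRALinear:
    """A frozen weight matrix W plus a trainable low-rank update B @ A.

    Output: y = W x + (alpha / r) * B (A x)

    During fine-tuning only A and B are updated, so the adapter holds
    far fewer parameters than W -- the core idea behind adapter-based
    fine-tuning. Illustrative sketch only, not the Predibase API.
    """

    def __init__(self, W, A, B, alpha=1.0):
        self.W = W                   # frozen base weights, shape (d_out, d_in)
        self.A = A                   # adapter down-projection, shape (r, d_in)
        self.B = B                   # adapter up-projection, shape (d_out, r)
        self.scale = alpha / len(A)  # alpha / rank

    def forward(self, x):
        base = matvec(self.W, x)
        delta = matvec(self.B, matvec(self.A, x))
        return [b + self.scale * d for b, d in zip(base, delta)]

# Tiny example: d_in = d_out = 2, rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen identity weights
A = [[1.0, 1.0]]               # 1 x 2 down-projection
B = [[0.5], [0.5]]             # 2 x 1 up-projection
layer = LoRALinear(W, A, B, alpha=1.0)
print(layer.forward([1.0, 2.0]))  # -> [2.5, 3.5]
```

Because the base weights are shared and frozen, many such adapters can be trained and swapped cheaply on top of one base model, which is what makes adapters attractive as a primary fine-tuning mechanism.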