
Fine-Tuning Platform Upgrades: Larger Models, Longer Contexts, Enhanced Hugging Face Integrations

Blog post from Together AI

Post Details
Company: Together AI
Date Published: -
Author: Artem Chumachenko, Maksim Abraham, Soroush Bassam, Gleb Vazhenin, Egor Timofeev, Conner Manuel, Zain Hasan, Will Van Eaton, Max Ryabinin
Word Count: 1,410
Language: English
Hacker News Points: -
Summary

Together AI's Fine-Tuning Platform enhances AI developers' ability to customize large language models (LLMs) by offering tools that streamline training, letting developers fine-tune models on domain-specific data to improve task performance while reducing cost and latency. The platform supports large models, including those with over 100 billion parameters, and handles long contexts, which is crucial for tasks such as long-document processing. Integrations with the Hugging Face Hub let developers fine-tune existing models or upload their own, creating a seamless workflow for model training and deployment. The platform also introduces advanced training objectives for preference optimization and offers convenience features such as automatically setting the batch size to maximize efficiency. These advancements aim to make sophisticated model training more accessible and cost-effective, encouraging developers to integrate fine-tuning into their AI development cycle.
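The job-launch workflow and automatic batch-size convenience feature described above can be sketched as follows. This is a minimal illustration, not the platform's actual API: the helper function, model id, and file id are all assumptions introduced for the example.

```python
def build_finetune_request(model, training_file, n_epochs=3, batch_size="max"):
    """Assemble the parameters for a hypothetical fine-tuning job.

    batch_size="max" mirrors the convenience feature the post describes:
    asking the platform to pick the largest batch size that fits,
    rather than tuning it by hand.
    """
    return {
        "model": model,
        "training_file": training_file,
        "n_epochs": n_epochs,
        "batch_size": batch_size,
    }


# Illustrative values only — the model id and file id are placeholders.
request = build_finetune_request(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct-Reference",
    training_file="file-abc123",
)
print(request["batch_size"])  # → max
```

In practice these parameters would be passed to the provider's SDK or REST endpoint when creating the job; the point here is only that the developer specifies a base model and uploaded dataset, and can delegate batch-size selection to the platform.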