
Fine-tuning API: Introducing long-context training, conversation data support and more configuration options

Blog post from Together AI

Post Details
Company: Together AI
Date Published: -
Author: Max Ryabinin, Artem Chumachenko, George Grigorev, Arsh Zahed, Gleb Vazhenin
Word Count: 1,726
Language: English
Hacker News Points: -
Summary

The Fine-tuning API has introduced new features, including long-context training, conversation data support, and more configuration options. These updates make it easier for ML teams to customize open models for their specific tasks. Long-context training now supports context lengths of up to 32K tokens for Llama 3.1 8B and 70B, in both fine-tuning and inference. Support for conversation and instruction data formats streamlines data preparation. Training quality has been improved without any change to hyperparameters, inputs, or the cost of fine-tuning jobs. Validation dataset support lets users monitor model loss on unseen data during training. Quality-of-life improvements include a richer Weights & Biases integration and automated batch size selection.
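
To make the new options concrete, below is a minimal sketch of what a job combining these features might look like with the `together` Python client. The conversation record schema, the parameter names (`validation_file`, `n_evals`, `batch_size="max"`, `wandb_api_key`), and the model identifier are illustrative assumptions based on the announcement, not excerpts from the post; consult the Fine-tuning API documentation for the exact signatures.

```python
# Sketch only: parameter names below are assumptions, not verified API details.
import json
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Conversation-format data: one JSON object per line with a "messages" list
# of role/content turns, rather than a single pre-formatted "text" field.
example = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does long-context training add?"},
        {"role": "assistant", "content": "Support for sequences up to 32K tokens."},
    ]
}
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

# Upload the training set and a held-out validation set.
train_file = client.files.upload(file="train.jsonl")
val_file = client.files.upload(file="validation.jsonl")

job = client.fine_tuning.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",  # assumed model id
    training_file=train_file.id,
    validation_file=val_file.id,  # monitor loss on unseen data during training
    n_evals=10,                   # assumed name: number of validation evaluations
    batch_size="max",             # assumed value: automated batch size selection
    wandb_api_key="...",          # enables Weights & Biases logging
)
print(job.id)
```

Passing a held-out validation file alongside the training data is what produces the evaluation-loss curve during training, and the `batch_size="max"` setting reflects the automated batch size feature described above.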