There has been a lot of interest in fine-tuning open-source LLMs, and the author shares their insights and practical code on how to do it. Fine-tuning means training an existing model on example input/output pairs that demonstrate the task you want it to learn, so that the instructions are encoded in the model's weights themselves rather than restated in every prompt. Compared with prompting, this is a more effective way to guide a model's behavior, though iteration is slower and more data is required. It can also bring significant cost savings: on a per-token basis, a fine-tuned 7B model can be roughly 50 times cheaper than GPT-3.5/4, with an example recipe-classification task showing a cost reduction of up to $22k.
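
To make the input/output-pair setup concrete, here is a minimal sketch of supervised fine-tuning with Hugging Face transformers and peft. The model name, LoRA hyperparameters, prompt template, and the two recipe examples are illustrative assumptions for a classification-style task, not the author's actual code or data.

```python
# A minimal sketch of fine-tuning on input/output pairs (assumptions noted below).
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

# Example input/output pairs demonstrating the task (hypothetical recipe labels).
pairs = [
    {"input": "Grill the chicken and toss with lettuce.", "output": "non-vegetarian"},
    {"input": "Simmer lentils with cumin and tomatoes.", "output": "vegetarian"},
]

model_name = "meta-llama/Llama-2-7b-hf"  # assumed 7B base model, not specified by the source
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

def format_pair(example):
    # Concatenate prompt and completion so the task gets encoded in the weights.
    text = f"### Input:\n{example['input']}\n### Output:\n{example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

dataset = Dataset.from_list(pairs).map(format_pair, remove_columns=["input", "output"])

model = AutoModelForCausalLM.from_pretrained(model_name)
# LoRA trains a small set of adapter weights, keeping the fine-tune cheap.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal LM) labels from the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice you would swap the toy list for thousands of real examples; the per-token cost advantage over GPT-3.5/4 comes from serving the much smaller fine-tuned model at inference time.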