Fine-tuning large language models (LLMs) is a crucial step in adapting them to specific applications or user expectations, such as adjusting reasoning style and output format or specializing in particular domains like code generation. The process updates the model's weights (or adds new ones), typically using instruction datasets such as CodeAlpaca, which provides 20,000 labeled instruction-following examples for code generation tasks.

Although fine-tuning has historically been costly, recent advances have made it far more accessible and affordable: a 7-billion-parameter Llama-2 model can be fine-tuned on a single T4 instance using Ludwig's declarative machine learning approach. The fine-tuned model showed improved performance on both domain-specific and generic prompts without suffering from catastrophic forgetting, underscoring fine-tuning's potential for enhancing LLM capabilities.

Future efforts will focus on refining this process for larger models, optimizing cost and performance benchmarks, and serving these models efficiently in production environments, with platforms like Predibase offering managed services for deploying fine-tuned LLMs in cloud settings.
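The declarative workflow described above can be sketched as a Ludwig configuration. This is a minimal illustration, not a tuned recipe: it assumes Ludwig's LLM fine-tuning schema (`model_type: llm`, a LoRA `adapter`, 4-bit `quantization` to fit the 7B model on a T4), and the `instruction`/`output` column names are assumptions about how the CodeAlpaca dataset is laid out.

```yaml
# Sketch of a Ludwig declarative config for fine-tuning Llama-2 7B on CodeAlpaca.
# Hyperparameters are illustrative placeholders, not benchmarked values.
model_type: llm
base_model: meta-llama/Llama-2-7b-hf

quantization:
  bits: 4          # load the base model in 4-bit so it fits on a single T4

adapter:
  type: lora       # parameter-efficient fine-tuning: train small adapter weights
                   # instead of updating all 7B parameters

input_features:
  - name: instruction   # assumed CodeAlpaca prompt column
    type: text

output_features:
  - name: output        # assumed CodeAlpaca response column
    type: text

trainer:
  type: finetune
  learning_rate: 0.0001
  batch_size: 1
  gradient_accumulation_steps: 16   # simulate a larger batch on limited VRAM
  epochs: 3
```

With a config like this, training reduces to a single command such as `ludwig train --config config.yaml --dataset codealpaca.json`, or the equivalent call through Ludwig's Python API.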