Axolotl is a wrapper around lower-level Hugging Face libraries (transformers, peft, trl) that simplifies fine-tuning large language models: it retains granular control while being easier to use than the underlying APIs. It ships with sensible defaults and built-in optimizations, including sample packing, which concatenates several short training examples into a single full-length sequence so less compute is wasted on padding. With Axolotl, users can train open-weight models such as Llama 3/3.1, Pythia, and Falcon on their own data without implementing the fine-tuning loop from scratch.

Unsloth is a framework designed to dramatically improve the speed and memory efficiency of LLM fine-tuning. Its authors report fine-tuning Llama 3.1, Mistral, Phi, and Gemma models 2-5x faster with up to 80% less memory than a standard FlashAttention 2 (FA2) baseline.

Torchtune is a PyTorch-native library for fine-tuning LLMs. Its design is lean and extensible, just pure PyTorch, and it interoperates well with popular libraries across the PyTorch ecosystem.

The choice between these tools ultimately depends on your specific requirements, hardware constraints, and level of expertise; Axolotl is a good starting point for beginners.
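To make the comparison concrete, below is a minimal sketch of a QLoRA fine-tune with Unsloth, following the pattern in Unsloth's published examples. The model name, dataset file, and hyperparameters are illustrative placeholders rather than a recommended recipe; note that Unsloth prepares the model and adapters, then hands training off to TRL's SFTTrainer.

```python
# Minimal Unsloth QLoRA fine-tuning sketch. Model id, dataset path,
# and hyperparameters below are placeholder assumptions.
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit-quantized base model with Unsloth's optimized kernels.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",  # assumed model id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# Assumed local dataset: one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        bf16=torch.cuda.is_bf16_supported(),
        fp16=not torch.cuda.is_bf16_supported(),
        output_dir="outputs",
    ),
)
trainer.train()
```

Axolotl and Torchtune cover the same ground through declarative YAML configs and recipe scripts, respectively; Axolotl's batteries-included defaults in particular are part of why it is the friendlier entry point for beginners.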