The Google Gemma 2B base model was fine-tuned using MonsterTuner's no-code LLM fine-tuner, yielding improved performance across benchmarks. The fine-tuning process used "No Robots," a high-quality dataset designed for supervised fine-tuning to improve a language model's ability to follow instructions. Compared with the base model, the fine-tuned model shows a clear gain in average benchmark performance and approaches the instruction-tuned variant, with notable improvements on complex reasoning tasks. The experiment highlights how much a smaller model can gain from effective fine-tuning, letting it compete with larger models on specific tasks.
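
To make the supervised fine-tuning step concrete, here is a minimal sketch of how a conversation record in the style of the "No Robots" dataset (stored as a list of `{"role", "content"}` messages) could be flattened into a single training string. MonsterTuner's internal preprocessing is not public, so the role-tag template below (`<|user|>`, `<|assistant|>`, `<|end|>`) is a hypothetical illustration of the general idea, not the tool's actual format.

```python
def format_example(messages):
    """Join chat messages into one supervised fine-tuning string.

    `messages` is a list of dicts with "role" and "content" keys,
    mirroring the conversation layout used by instruction datasets
    such as "No Robots". The role tags are illustrative placeholders.
    """
    parts = [f"<|{msg['role']}|>\n{msg['content']}" for msg in messages]
    return "\n".join(parts) + "\n<|end|>"

# Hypothetical example record in the dataset's conversational shape.
sample = [
    {"role": "user", "content": "Summarize photosynthesis in one sentence."},
    {"role": "assistant",
     "content": "Plants convert light, water, and CO2 into sugar and oxygen."},
]
print(format_example(sample))
```

Strings produced this way are what the trainer actually sees: the model learns to continue the `<|assistant|>` span given the preceding `<|user|>` span, which is how instruction-following behavior is instilled.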