Gemma-2B, a lightweight, state-of-the-art open model from Google, was fine-tuned on MonsterAPI's no-code LLM fine-tuner to improve its mathematical reasoning. Optimized for this specific task, the model scored 20.02 on the GSM Plus benchmark, a 68% improvement over its baseline and ahead of larger models such as LLaMA 13B. The result shows that a smaller model, fine-tuned for a targeted task, can outperform a larger general-purpose one, underscoring the value of task-specific optimization. For NLP practitioners, the takeaway is that fine-tuning for the task at hand can deliver better efficiency and accuracy than simply scaling up model size.
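
The study itself used MonsterAPI's no-code fine-tuner, whose internals are not described here, so no code is needed to reproduce that workflow. As a rough open-source equivalent, the sketch below shows parameter-efficient LoRA fine-tuning of Gemma-2B with Hugging Face `transformers` and `peft`; the GSM8K dataset, hyperparameters, and output path are illustrative assumptions, not details from the study.

```python
# Minimal LoRA fine-tuning sketch for Gemma-2B on a math-reasoning corpus.
# This is a stand-in for MonsterAPI's no-code pipeline, not its actual
# implementation; dataset choice and hyperparameters are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "google/gemma-2b"  # gated model; requires Hugging Face access
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Wrap the base model with low-rank adapters; only adapter weights train,
# which is what keeps fine-tuning a 2B model cheap relative to a 13B one.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# GSM8K serves as an illustrative math-reasoning dataset; the study does
# not name its exact training data.
raw = load_dataset("gsm8k", "main", split="train")

def tokenize(example):
    # Concatenate question and answer into one causal-LM training string.
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

train = raw.map(tokenize, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gemma2b-math-lora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=train,
    # mlm=False gives standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, the adapter weights can be merged into the base model or kept separate for lightweight deployment; either way, evaluating on a held-out benchmark such as GSM Plus is what would surface the kind of gain the study reports.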