Using Monster Tuner, we successfully finetuned the Mistral-7B, Falcon-7B, and Zephyr-7B large language models and evaluated them on standard benchmarks such as ARC, HellaSwag, and TruthfulQA (with "Average" denoting the mean score across these tasks). The finetuned models outperform their pre-trained base counterparts, with finetuned Mistral-7B achieving the highest average score of 47.04, closely followed by Zephyr-7B at 46.86. This approach demonstrates that language models can be finetuned cost-effectively and efficiently, without extensive coding knowledge or setup complexity. The results highlight the potential of this no-code LLM finetuner for natural language understanding and broader AI applications.
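To make the reported "Average" column concrete: it is simply the mean of the per-benchmark scores for a given model. The sketch below uses hypothetical per-task numbers chosen for illustration only (they are not the actual scores from this evaluation):

```python
# Hypothetical per-benchmark scores for a finetuned model (illustrative only;
# these are NOT the actual numbers behind the 47.04 / 46.86 averages above).
scores = {"ARC": 45.0, "HellaSwag": 52.0, "TruthfulQA": 44.1}

# The leaderboard-style "Average" is the arithmetic mean across the tasks.
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # → 47.03
```

Models are then ranked by this single average, which is how one finetuned model (here, Mistral-7B) can be declared the overall leader even if another model wins an individual benchmark.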