Tuna is a no-code tool for rapidly creating high-quality fine-tuning datasets for large language models (LLMs) such as GPT-3.5-turbo and Llama 2, making it easier to train models for specific applications or domains. It is available both as a web interface and as a faster Python script. Users supply a CSV file of text data, and Tuna generates prompt-completion pairs through OpenAI's API, grounding each pair in the source text to minimize hallucination.

Fine-tuning adapts an LLM to a particular task, such as legal writing or a conversational format, specializing its responses and improving the performance of smaller, self-hosted models. Because fine-tuning normally demands a high-quality dataset that is costly to assemble by hand, Tuna lowers that barrier by automating the generation of synthetic data. It supports several dataset configurations, including SimpleQA, MultiChunk for retrieval-augmented generation (RAG), and custom prompts, giving users flexibility to tailor data to a specific fine-tuning goal. Fine-tuning reliably improves response formatting and consistency, though its efficacy at embedding new information remains debated, and RAG often provides a more practical solution for that purpose.
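The CSV-to-dataset workflow described above can be sketched in a few lines of Python. This is a minimal illustration, not Tuna's actual implementation: the CSV column name `text`, the prompt wording, and the helper names are assumptions, and real code would parse the model's reply more defensively.

```python
import csv
import json

# Hypothetical prompt asking the model for one grounded Q&A pair per chunk.
QA_PROMPT = (
    "Write one question answerable solely from the passage below, "
    "then answer it using only that passage, as 'Q: ...\\nA: ...'.\n\n"
    "Passage:\n{chunk}"
)

def to_finetune_record(question: str, answer: str) -> dict:
    """Format a prompt-completion pair in OpenAI's chat fine-tuning schema."""
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

def generate_dataset(csv_path: str, out_path: str) -> None:
    """Read text chunks from a CSV and emit a JSONL fine-tuning file."""
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment
    client = OpenAI()
    with open(csv_path, newline="") as f, open(out_path, "w") as out:
        for row in csv.DictReader(f):  # assumes a "text" column of chunks
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user",
                           "content": QA_PROMPT.format(chunk=row["text"])}],
            )
            # Naive split on the first newline; assumes "Q: ...\nA: ..." form.
            q, _, a = resp.choices[0].message.content.partition("\n")
            record = to_finetune_record(q.strip(), a.strip())
            out.write(json.dumps(record) + "\n")
```

Each JSONL line matches the chat format OpenAI's fine-tuning endpoint expects, so the output file can be uploaded directly as training data.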