The authors of the blog post fine-tuned a Stable Diffusion model on the Heroicons library to create a generative icon model. They used Modal, a serverless cloud computing platform, to run the fine-tuning script and to deploy the fine-tuned model as an interactive web app.

They explored several fine-tuning techniques: full fine-tuning, sequential adapter fine-tuning, and parallel adapter fine-tuning. Full fine-tuning worked best for their use case, though they also discussed the advantages of parallel adapter methods such as LoRA.

To prepare the dataset, the authors downloaded the Heroicons from GitHub, converted the SVGs to PNGs, added white backgrounds, generated captions, and uploaded the result to the HuggingFace Hub.

They then set up a Modal account, created a `TrainConfig` class to hold training hyperparameters, and defined an `AppConfig` class to store inference hyperparameters. They fine-tuned the Stable Diffusion model on the Heroicons dataset using the `train_text_to_image.py` script and saved the fine-tuned weights in a Modal Volume. That Volume was then mounted into a new Modal `inference` function, which generates icons from user input.

Finally, the authors built a Gradio UI that calls the `Model.inference` function and deployed the app on Modal with a single command. They evaluated the fine-tuned model as an "infinite icon library," discussed the challenges of fine-tuning, and noted the potential of grid searches to scale up the process.
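The dataset-preparation step could look roughly like the sketch below: `cairosvg` rasterizes each SVG, Pillow flattens the transparent background onto white, and the filename seeds a caption written out in HuggingFace's `imagefolder` metadata format. The directory paths, output resolution, and caption template are assumptions for illustration, not the authors' exact code.

```python
# Hedged sketch: rasterize Heroicon SVGs to 512x512 PNGs on white backgrounds
# and derive a caption per icon from its filename.
import json
from pathlib import Path

import cairosvg            # SVG -> PNG rasterizer
from PIL import Image

SVG_DIR = Path("heroicons/optimized/24/outline")  # assumed local clone path
OUT_DIR = Path("data/images")
OUT_DIR.mkdir(parents=True, exist_ok=True)

captions = {}
for svg_path in SVG_DIR.glob("*.svg"):
    png_path = OUT_DIR / f"{svg_path.stem}.png"
    # 512x512 is the native resolution of Stable Diffusion v1.x.
    cairosvg.svg2png(url=str(svg_path), write_to=str(png_path),
                     output_width=512, output_height=512)

    # Rasterized SVGs have transparent backgrounds; flatten onto white.
    icon = Image.open(png_path).convert("RGBA")
    white = Image.new("RGBA", icon.size, "white")
    white.paste(icon, mask=icon)
    white.convert("RGB").save(png_path)

    # e.g. "academic-cap" -> "an icon of academic cap" (template assumed).
    captions[png_path.name] = f"an icon of {svg_path.stem.replace('-', ' ')}"

# metadata.jsonl lets the folder load as a HF "imagefolder" dataset with a
# text column, ready to push to the Hub.
with open(OUT_DIR / "metadata.jsonl", "w") as f:
    for file_name, text in captions.items():
        f.write(json.dumps({"file_name": file_name, "text": text}) + "\n")
```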
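A minimal sketch of how the Modal training job might be wired up follows: a GPU function that runs the stock `diffusers` `train_text_to_image.py` script and writes the fine-tuned weights into a persistent `modal.Volume`. The app, volume, and dataset names, as well as the hyperparameter values (which stand in for the blog's `TrainConfig`), are all assumptions.

```python
# Hedged sketch: full fine-tune on Modal, persisting the result in a Volume.
import subprocess

import modal

volume = modal.Volume.from_name("heroicons-model", create_if_missing=True)
image = (
    modal.Image.debian_slim()
    .apt_install("wget")
    .pip_install("diffusers", "transformers", "accelerate", "datasets",
                 "torch", "torchvision")
    # Fetch the training script from the diffusers repo into the image.
    .run_commands(
        "wget https://raw.githubusercontent.com/huggingface/diffusers/"
        "main/examples/text_to_image/train_text_to_image.py"
    )
)
app = modal.App("heroicons-finetune", image=image)

@app.function(gpu="A100", volumes={"/model": volume}, timeout=4 * 60 * 60)
def train():
    # Values here are placeholders for the blog's TrainConfig hyperparameters.
    subprocess.run(
        [
            "accelerate", "launch", "train_text_to_image.py",
            "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
            "--dataset_name", "user/heroicons",   # assumed HF Hub dataset name
            "--resolution", "512",
            "--train_batch_size", "1",
            "--max_train_steps", "2000",
            "--output_dir", "/model",             # lands inside the Volume
        ],
        check=True,
    )
    volume.commit()  # persist the checkpoint for the inference function
```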
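Continuing the same sketch (reusing `app`, `image`, and `volume` from above), the inference side might be a Modal class that loads the fine-tuned pipeline from the Volume once per container and exposes a `Model.inference` method, as the post describes. The GPU type and sampling parameters are assumptions.

```python
import io

import modal

@app.cls(gpu="A10G", volumes={"/model": volume}, image=image)
class Model:
    @modal.enter()
    def load(self):
        # Load the fine-tuned weights from the Volume once at container start.
        import torch
        from diffusers import StableDiffusionPipeline

        self.pipe = StableDiffusionPipeline.from_pretrained(
            "/model", torch_dtype=torch.float16
        ).to("cuda")

    @modal.method()
    def inference(self, prompt: str) -> bytes:
        # Generate one icon and return it as PNG bytes.
        image = self.pipe(prompt, num_inference_steps=25).images[0]
        buf = io.BytesIO()
        image.save(buf, format="PNG")
        return buf.getvalue()
```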
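The Gradio front end could then be served from Modal as an ASGI app that calls `Model.inference` remotely, along the lines of the sketch below; the interface layout and helper names are assumptions.

```python
@app.function(image=image.pip_install("gradio", "fastapi"))
@modal.asgi_app()
def ui():
    import io

    import gradio as gr
    from fastapi import FastAPI
    from gradio.routes import mount_gradio_app
    from PIL import Image

    def generate(prompt: str) -> Image.Image:
        # Call the GPU-backed inference method remotely, decode the PNG bytes.
        png_bytes = Model().inference.remote(prompt)
        return Image.open(io.BytesIO(png_bytes))

    demo = gr.Interface(fn=generate, inputs="text", outputs="image",
                        title="Infinite Icon Library")
    return mount_gradio_app(app=FastAPI(), blocks=demo, path="/")
```

With a layout like this, the single deploy command the post refers to would presumably be `modal deploy`, which publishes both the inference class and the web UI together.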