Mochi 1, an open state-of-the-art video generation model, was recently released by Genmo. The model generates short video clips from text prompts and is designed to be fine-tuned for specific use cases with LoRA (Low-Rank Adaptation), a technique that sharply reduces the memory required for training. By releasing scripts and sample code for fine-tuning Mochi 1 with LoRA, Genmo has made it easier for others to build on top of this foundation model. Fine-tuning gives finer control over style and improves character consistency across generations, which is useful for specialized video content or for proprietary data that defines a brand's style. Full fine-tuning is resource-intensive, but LoRA's smaller memory footprint makes fine-tuning feasible on a single GPU with minimal data. Mochi 1 is designed to work seamlessly with Modal, a high-performance AI infrastructure platform that offers scalable resources and enterprise-grade security.
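To make the memory claim concrete, here is a minimal sketch of the LoRA idea itself, not of Genmo's actual training code: the pretrained weight matrix is frozen, and only a pair of small low-rank factors is trained. All shapes, names, and hyperparameters below (`d_out`, `d_in`, rank `r`, scaling `alpha`) are illustrative assumptions.

```python
import numpy as np

# LoRA sketch: a frozen weight matrix W is augmented with a trainable
# low-rank update B @ A, so the effective weight is W + (alpha / r) * B @ A.
d_out, d_in, r, alpha = 512, 512, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init,
                                            # so training starts from W exactly)

def lora_forward(x):
    # Base path plus low-rank correction; only A and B would receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

# The memory saving: trainable parameters drop from d_out * d_in
# to r * (d_in + d_out).
full_params = d_out * d_in          # 262144
lora_params = A.size + B.size       # 8192, i.e. ~32x fewer trainable params
print(full_params, lora_params)
```

Because optimizer state (gradients, moment estimates) scales with the number of trainable parameters, shrinking that count is what lets fine-tuning fit on a single GPU.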