
Train Your Own Video LoRAs with Diffusion-Pipe

Blog post from RunPod

Post Details

Company: RunPod
Date Published:
Author: Brendan McKeag
Word Count: 867
Language: English
Hacker News Points: -
Summary

Training your own LoRAs for Flux, Hunyuan Video, and LTX Video with tdrussell's diffusion-pipe involves spinning up a pod with at least 48GB of VRAM, using the Better Comfy template for easy testing and access to VSCode, and preparing the training environment by cloning the necessary repositories and setting up the model and video directories. Training requires uploading videos and matching text annotations to a specific folder, adjusting the configuration files, and running the training script, which saves the LoRA every two epochs. Results can be tested and rendered during training in ComfyUI with the HunyuanVideo LoRA Select node, where parameters such as embedded_guidance_scale and flow_shift influence creativity and frame movement. Experimenting with these variables is essential because they strongly affect the video output, and keeping a record of successful seeds helps in reproducing desired results. Although the process can seem daunting at first, it becomes straightforward with familiarity, and help is available on Discord.
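
Because the workflow pairs each uploaded video with a text annotation, a quick sanity check before launching training can save a wasted run. The sketch below is a hypothetical helper, not part of diffusion-pipe or the original post: the dataset path, the accepted video extensions, and the sidecar .txt-per-clip convention are assumptions you should adjust to match your own dataset configuration.

# Hypothetical pre-flight check: confirm every training clip has a non-empty
# sidecar .txt caption. Paths and extensions are illustrative assumptions.
from pathlib import Path

DATASET_DIR = Path("/workspace/training_data")   # assumed upload folder
VIDEO_EXTS = {".mp4", ".mov", ".mkv", ".webm"}   # adjust to your clips

def check_captions(dataset_dir: Path) -> None:
    if not dataset_dir.is_dir():
        print(f"Dataset folder not found: {dataset_dir}")
        return
    missing = []
    for clip in sorted(dataset_dir.iterdir()):
        if clip.suffix.lower() not in VIDEO_EXTS:
            continue
        caption = clip.with_suffix(".txt")
        if not caption.exists() or not caption.read_text(encoding="utf-8").strip():
            missing.append(clip.name)
    if missing:
        print("Clips missing (or with empty) captions:")
        for name in missing:
            print(f"  {name}")
    else:
        print("All clips have non-empty caption files.")

if __name__ == "__main__":
    check_captions(DATASET_DIR)

Running a check like this from the pod's terminal before editing the training configuration makes it easy to catch clips whose annotations were never uploaded, rather than discovering the problem mid-run.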