Training on ultra-long sequences is key to improving model generalization in complex scenarios, but keeping memory usage under control is a significant challenge because attention has O(N^2) compute and memory cost in the sequence length. Sequence Parallelism (SP) addresses this by splitting the input sequence into sub-sequences that are processed in parallel across GPUs, reducing the per-GPU memory demand; Ulysses and Ring-Attention are the two most widely used variants.

Ulysses, developed by the DeepSpeed team, splits the sequence across GPUs and uses all-to-all communication so that each GPU computes attention over the full sequence for only a subset of the attention heads. Ring-Attention instead computes attention block-wise: each GPU keeps its local query block while key/value blocks circulate around a ring, so peak memory stays bounded and communication can overlap with computation. The two approaches are complementary and can be combined to maximize memory efficiency, as demonstrated on the Qwen2.5-3B model, where increasing the number of sequence splits significantly reduced per-GPU memory requirements.

The SWIFT framework integrates both techniques, enabling long-sequence training to scale across GPUs with limited memory, and extends them to multimodal models and padding-free sequences. Ongoing work targets further optimization of backward propagation and communication efficiency, and community contributions to long sequence training are welcome.
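To make the Ulysses mechanism concrete, below is a minimal PyTorch sketch of the core all-to-all step: each GPU starts with a slice of the sequence and all heads, the first all-to-all gathers the full sequence while scattering heads, attention runs locally on that head slice, and a second all-to-all restores the original sequence split. The function names, tensor layout, and `sp_group` argument are illustrative assumptions, not SWIFT's or DeepSpeed's actual API; causal masking and uneven head counts are omitted for brevity.

```python
# Minimal sketch of Ulysses-style sequence parallelism (illustrative only).
# Assumes torch.distributed is initialized and num_heads is divisible by the SP degree.
import torch
import torch.distributed as dist
import torch.nn.functional as F


def ulysses_attention(q, k, v, sp_group):
    """q, k, v: [batch, local_seq, num_heads, head_dim]; each rank holds seq/sp tokens."""
    sp = dist.get_world_size(sp_group)

    def seq_to_head(x):
        # Scatter heads, gather sequence: after all_to_all each rank holds
        # the full sequence but only num_heads / sp attention heads.
        b, s_local, h, d = x.shape
        x = x.reshape(b, s_local, sp, h // sp, d)
        chunks = [x[:, :, r].contiguous() for r in range(sp)]
        out = [torch.empty_like(chunks[0]) for _ in range(sp)]
        dist.all_to_all(out, chunks, group=sp_group)
        return torch.cat(out, dim=1)                      # [b, s_full, h // sp, d]

    def head_to_seq(x, s_local):
        # Inverse all_to_all: gather heads back, re-split the sequence.
        chunks = [c.contiguous() for c in torch.split(x, s_local, dim=1)]
        out = [torch.empty_like(chunks[0]) for _ in range(sp)]
        dist.all_to_all(out, chunks, group=sp_group)
        return torch.cat(out, dim=2)                      # [b, s_local, h, d]

    s_local = q.shape[1]
    q, k, v = seq_to_head(q), seq_to_head(k), seq_to_head(v)
    # Standard attention over the full sequence, on the local head slice only
    # (causal masking omitted for brevity).
    o = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
    ).transpose(1, 2)
    return head_to_seq(o, s_local)
```

Because the sequence dimension is fully gathered per head slice, the parallel degree of pure Ulysses is bounded by the number of attention heads, which is one reason it pairs well with Ring-Attention.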
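Ring-Attention can be sketched in a similar spirit: each GPU keeps its query block fixed and accumulates an online softmax while key/value blocks rotate around the ring. This is a simplified illustration under the assumption that the sequence-parallel group spans all ranks; real implementations additionally fuse this loop with FlashAttention-style kernels, apply causal masking, and overlap the ring communication with computation.

```python
# Simplified Ring-Attention sketch (illustrative only, non-causal).
import torch
import torch.distributed as dist


def ring_send_recv(t):
    # Pass a tensor to the next rank and receive one from the previous rank
    # (assumes the sequence-parallel group is the full world).
    rank, world = dist.get_rank(), dist.get_world_size()
    recv = torch.empty_like(t)
    ops = [
        dist.P2POp(dist.isend, t, (rank + 1) % world),
        dist.P2POp(dist.irecv, recv, (rank - 1) % world),
    ]
    for req in dist.batch_isend_irecv(ops):
        req.wait()
    return recv


def ring_attention(q, k, v):
    """q, k, v: [batch, heads, local_seq, head_dim]; each rank holds one sequence block."""
    world = dist.get_world_size()
    scale = q.shape[-1] ** -0.5

    acc = torch.zeros_like(q)                                         # running numerator
    row_max = torch.full(q.shape[:-1] + (1,), float("-inf"),
                         dtype=q.dtype, device=q.device)              # running max
    row_sum = torch.zeros_like(row_max)                               # running denominator

    k_blk, v_blk = k.contiguous(), v.contiguous()
    for step in range(world):
        s = (q @ k_blk.transpose(-2, -1)) * scale                     # scores vs. this K/V block
        blk_max = s.amax(dim=-1, keepdim=True)
        new_max = torch.maximum(row_max, blk_max)
        correction = torch.exp(row_max - new_max)                     # rescale previous partials
        p = torch.exp(s - new_max)
        acc = acc * correction + p @ v_blk
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        row_max = new_max
        if step < world - 1:
            # Rotate the K/V block to the next rank in the ring.
            k_blk = ring_send_recv(k_blk)
            v_blk = ring_send_recv(v_blk)
    return acc / row_sum
```

The online-softmax accumulation is what keeps memory proportional to the local block size: no rank ever materializes the full N x N attention matrix.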