The recent paper "Tracking Everything Everywhere All at Once" proposes a motion estimation technique called OmniMotion, which addresses the shortcomings of traditional pairwise methods (drift over long sequences and loss of track through occlusion) by representing the scene as a quasi-3D canonical volume. This representation captures camera and scene motion jointly, without explicitly disentangling the two, enabling globally cycle-consistent 3D mappings and tracking of points even when they are temporarily occluded. OmniMotion employs 3D bijections: continuous, invertible mappings between 3D points in each frame's local coordinates and a shared canonical 3D coordinate frame, which enforce spatial and temporal coherence. The paper also introduces a test-time optimization method that fits this representation to a single video, yielding dense, full-length motion estimates for every pixel. Evaluated on several benchmarks, the method outperforms prior approaches in position accuracy, occlusion accuracy, and temporal coherence, though it still struggles with rapid, highly non-rigid motion and thin structures.
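The key property of the 3D bijections is exact invertibility: mapping a point from a local frame into the canonical volume and back must recover it precisely, which is what makes correspondences between any two frames cycle-consistent (go local → canonical → local). The paper parameterizes these maps with invertible neural networks; the sketch below illustrates the same mechanism with toy affine coupling layers (random fixed linear maps stand in for learned networks, and all names here are hypothetical, not the paper's code).

```python
import numpy as np

class AffineCoupling:
    """One affine coupling layer: an exactly invertible map on 3D points.
    The masked coordinates pass through unchanged; the rest are scaled and
    shifted by amounts that depend only on the masked part, so the map
    inverts in closed form."""

    def __init__(self, mask, rng):
        self.mask = mask.astype(float)        # 1 = pass-through dims
        # Toy stand-ins for learned conditioner networks (hypothetical).
        self.ws = rng.normal(scale=0.1, size=(3, 3))
        self.wt = rng.normal(scale=0.1, size=(3, 3))

    def _scale_shift(self, x_pass):
        s = np.tanh(x_pass @ self.ws)         # bounded log-scale
        t = x_pass @ self.wt                  # shift
        return s, t

    def forward(self, x):
        x_pass = x * self.mask
        s, t = self._scale_shift(x_pass)
        return x_pass + (1 - self.mask) * (x * np.exp(s) + t)

    def inverse(self, y):
        y_pass = y * self.mask                # pass-through dims are unchanged,
        s, t = self._scale_shift(y_pass)      # so s, t can be recomputed exactly
        return y_pass + (1 - self.mask) * ((y - t) * np.exp(-s))

rng = np.random.default_rng(0)
layers = [AffineCoupling(np.array([1.0, 1.0, 0.0]), rng),
          AffineCoupling(np.array([0.0, 1.0, 1.0]), rng),
          AffineCoupling(np.array([1.0, 0.0, 1.0]), rng)]

def to_canonical(x):
    """Local frame coordinates -> canonical volume."""
    for layer in layers:
        x = layer.forward(x)
    return x

def from_canonical(u):
    """Canonical volume -> local frame coordinates."""
    for layer in reversed(layers):
        u = layer.inverse(u)
    return u

pts = rng.normal(size=(5, 3))                 # 3D points in one frame
roundtrip = from_canonical(to_canonical(pts))
print(np.allclose(pts, roundtrip))            # exact inverse up to float error
```

In the actual method each frame has its own bijection (conditioned on a per-frame latent code), so a correspondence from frame i to frame j is the composition of frame i's forward map with frame j's inverse map; invertibility of each piece is what makes the composed mappings globally cycle-consistent.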