
Tora: Trajectory-oriented Diffusion Transformer for Video Generation
CVPR 2025
Alibaba Group
*Equal Contribution
📢 Update: 🔥🔥 Our latest work, Tora2, has been accepted to ACM MM 2025. Tora2 builds on Tora with design improvements, enabling enhanced appearance and motion customization for multiple entities. See the project page: https://ali-videoai.github.io/Tora2_page/
Trajectory Control
Tora ensures the generated movements precisely follow the specified trajectories while authentically replicating the dynamics of the physical world.

Abstract
Recent advancements in Diffusion Transformer (DiT) have demonstrated remarkable proficiency in producing high-quality video content. Nonetheless, the potential of transformer-based diffusion models for effectively generating videos with controllable motion remains an area of limited exploration. This paper introduces Tora, the first trajectory-oriented DiT framework that integrates textual, visual, and trajectory conditions concurrently for video generation. Specifically, Tora consists of a Trajectory Extractor (TE), a Spatial-Temporal DiT, and a Motion-guidance Fuser (MGF). The TE encodes arbitrary trajectories into hierarchical spacetime motion patches with a 3D video compression network. The MGF integrates the motion patches into the DiT blocks to generate consistent videos that follow the trajectories. Our design aligns seamlessly with DiT's scalability, allowing precise control of video content's dynamics across diverse durations, aspect ratios, and resolutions. Extensive experiments demonstrate Tora's excellence in achieving high motion fidelity, while also meticulously simulating the movement of the physical world.
Method
Overview of the Tora Architecture. To achieve trajectory-controlled DiT-based video generation, we introduce two novel modules: the Trajectory Extractor and the Motion-guidance Fuser. The Trajectory Extractor employs a 3D motion VAE to embed trajectory vectors into the same latent space as video patches, effectively preserving motion information across consecutive frames. It then uses stacked convolutional layers to extract hierarchical motion features. The Motion-guidance Fuser utilizes adaptive normalization layers to seamlessly inject these multi-level motion conditions into the corresponding DiT blocks, ensuring the generation of videos that consistently follow the defined trajectories. Our method aligns with the scalability of DiT, enabling the creation of high-resolution, motion-controllable videos with prolonged durations.
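To make the interaction between the two modules concrete, here is a minimal PyTorch sketch assembled from the description above. It is not the released implementation: the class names (`TrajectoryExtractor`, `MotionGuidanceFuser`), tensor shapes, the stand-in convolution used in place of the 3D motion VAE encoder, and the zero-initialized adaptive-normalization fusion are assumptions made for illustration only.

```python
# Minimal sketch of the Trajectory Extractor and Motion-guidance Fuser,
# written from the paper's description; shapes and names are assumptions.
import torch
import torch.nn as nn


class TrajectoryExtractor(nn.Module):
    """Encodes trajectories into hierarchical spacetime motion patches.

    Assumes the trajectory has already been rasterized into a dense
    displacement volume of shape (B, 2, T, H, W); a single Conv3d stands in
    for the 3D motion VAE encoder that maps it into the video-patch latent space.
    """

    def __init__(self, latent_channels: int = 4, hidden: int = 128, num_levels: int = 4):
        super().__init__()
        self.motion_vae_enc = nn.Conv3d(2, latent_channels, kernel_size=3, padding=1)
        # Stacked 3D convolutions, one motion feature per DiT level.
        self.levels = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(latent_channels if i == 0 else hidden, hidden, 3, padding=1),
                nn.SiLU(),
            )
            for i in range(num_levels)
        ])

    def forward(self, traj: torch.Tensor) -> list[torch.Tensor]:
        # traj: (B, 2, T, H, W) per-frame (dx, dy) displacement maps.
        z = self.motion_vae_enc(traj)          # project into the patch latent space
        feats, h = [], z
        for level in self.levels:
            h = level(h)
            feats.append(h)                    # hierarchical motion features
        return feats                           # (patchifying to tokens is omitted)


class MotionGuidanceFuser(nn.Module):
    """Injects one motion feature into one DiT block via adaptive normalization.

    The motion tokens predict a per-channel scale and shift that modulate the
    normalized hidden states; the projection is zero-initialized (an assumption)
    so the block starts out as an identity on the pretrained DiT features.
    """

    def __init__(self, dim: int, motion_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(motion_dim, 2 * dim)
        nn.init.zeros_(self.to_scale_shift.weight)
        nn.init.zeros_(self.to_scale_shift.bias)

    def forward(self, hidden: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        # hidden: (B, N, dim) spacetime video tokens inside a DiT block.
        # motion: (B, N, motion_dim) motion tokens aligned with the video tokens.
        scale, shift = self.to_scale_shift(motion).chunk(2, dim=-1)
        return hidden + scale * self.norm(hidden) + shift
```

In this sketch the zero-initialized projection makes the fuser an identity at the start of training, so the pretrained DiT's generation quality is untouched while motion control is learned; whether Tora initializes its adaptive normalization this way is an assumption here, not a claim about the released code.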
BibTeX
@inproceedings{zhang2025tora,
  title     = {Tora: Trajectory-oriented diffusion transformer for video generation},
  author    = {Zhang, Zhenghao and Liao, Junchao and Li, Menghao and Dai, Zuozhuo and Qiu, Bingxue and Zhu, Siyu and Qin, Long and Wang, Weizhi},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages     = {2063--2073},
  year      = {2025}
}