Instruction-guided video editing is a highly challenging task because large-scale, high-quality pairs of source and edited videos are difficult to collect. This scarcity not only limits the availability of training data but also hinders the systematic exploration of model architectures and training strategies. While prior work has improved specific aspects of video editing (e.g., synthesizing video datasets with image editing techniques, or decomposing video editing training), a holistic framework that addresses these challenges jointly remains underexplored.
In this study, we introduce InstructVEdit, a full-cycle instructional video editing approach that (1) establishes a reliable dataset curation workflow to initialize training, (2) incorporates two architectural improvements to enhance edit quality while preserving temporal consistency, and (3) employs an iterative refinement strategy that leverages real-world data to improve generalization and reduce the train-test gap.
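To make the full-cycle workflow concrete, the sketch below outlines its three stages in Python. This is a minimal illustration of the process as described above, not the authors' implementation: all helper names (curate_paired_dataset, train_editor, edit_and_filter) and the data representation are hypothetical stubs.

```python
from typing import List, Tuple

# Hypothetical representation of one training sample:
# (source video path, editing instruction, edited video path).
Pair = Tuple[str, str, str]

def curate_paired_dataset() -> List[Pair]:
    """Stage 1 (assumed): build an initial curated paired dataset,
    e.g., by synthesizing edits from existing videos. Stub."""
    return []

def train_editor(dataset: List[Pair]) -> object:
    """Stage 2 (assumed): train the editing model on paired data,
    including any architectural improvements. Stub."""
    return object()

def edit_and_filter(model: object, real_videos: List[str]) -> List[Pair]:
    """Stage 3 (assumed): edit unlabeled real-world videos with the
    current model and keep only high-quality results. Stub."""
    return []

def full_cycle(real_videos: List[str], rounds: int = 3) -> object:
    """Iterative refinement: fold filtered real-world edits back into
    the training set to reduce the train-test gap."""
    dataset = curate_paired_dataset()
    model = train_editor(dataset)
    for _ in range(rounds):
        dataset += edit_and_filter(model, real_videos)
        model = train_editor(dataset)
    return model
```

The key design point this sketch highlights is the feedback loop: each round enlarges the training set with self-generated, quality-filtered real-world pairs before retraining.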
Extensive experiments show that InstructVEdit achieves state-of-the-art performance in instruction-based video editing, demonstrating robust adaptability to diverse real-world scenarios. Code, models, and datasets will be released to facilitate further research.
[Teaser figure: qualitative comparison of Original, Tune-a-Video, AnyV2V, TokenFlow, InsV2V, and Ours on three editing instructions — "Make the style Minecraft.", "Make it a snowy day.", and "Change the cat to be made of paper."]