FateZero: Fusing Attentions for Zero-shot Text-based Video Editing

ICCV 2023 Oral Presentation


1 HKUST     2 Tencent AI Lab     3 CAIR, CAS

TL;DR: Edit¹ your video with a pretrained Stable Diffusion² model, without any training.

+ Ukiyo-e Style

+ Makoto Shinkai Style

+ Van Gogh Style

Swan ➜ Swarovski Swan

Duck ➜ Rubber Duck

Dog ➜ Robotic Dog

White Fox ➜ Gray Wolf

Cat ➜ Black Cat, Grass

Bear ➜ Lion

Cat ➜ Porsche Car*

Rabbit ➜ Robot Mouse

Man ➜ Wonder Woman*

Swan ➜ White Duck

Bus ➜ GPU

Abstract

Diffusion-based generative models have achieved remarkable success in text-based image generation. However, because the generation process involves enormous randomness, it remains challenging to apply such models to real-world visual content editing, especially in videos. In this paper, we propose FateZero, a zero-shot text-based editing method for real-world videos that requires no per-prompt training or user-specific mask. To edit videos consistently, we propose several techniques based on pre-trained models. First, in contrast to the straightforward DDIM inversion technique, our approach captures intermediate attention maps during inversion using the source prompt¹, which effectively retain both structural and motion information. These maps are fused directly into the editing process rather than regenerated during denoising. To further minimize semantic leakage from the source video, we then fuse self-attention maps with a blending mask obtained from the cross-attention features of the source prompt. Furthermore, we reform the self-attention mechanism in the denoising UNet by introducing spatial-temporal attention to ensure frame consistency. Despite its simplicity, our method is the first to demonstrate zero-shot text-driven video style and local attribute editing with a trained text-to-image model. It also achieves better zero-shot shape-aware editing when built on a one-shot text-to-video model. Extensive experiments demonstrate superior temporal consistency and editing capability compared with previous works.
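
As a concrete illustration of the spatial-temporal attention mentioned above, below is a minimal PyTorch sketch (not the released FateZero code) of a self-attention layer whose keys and values are gathered from both the first frame and the current frame, so that every frame attends to a shared reference while keeping its own spatial detail. The module name, the choice of reference frames, and the tensor layout are illustrative assumptions.

import torch
from torch import nn


class SpatialTemporalAttention(nn.Module):
    """Self-attention over one frame's tokens, with keys/values drawn from
    the first frame and the current frame (illustrative sketch only)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, n_frames: int) -> torch.Tensor:
        # x: (batch * n_frames, tokens, dim) spatial features from the UNet
        bf, t, d = x.shape
        q = self.to_q(x)

        # Keys/values for every frame come from [first frame, current frame],
        # so each frame stays anchored to a shared reference.
        x_frames = x.reshape(bf // n_frames, n_frames, t, d)
        first = x_frames[:, :1].expand(-1, n_frames, -1, -1)
        kv_src = torch.cat([first, x_frames], dim=2).reshape(bf, 2 * t, d)
        k, v = self.to_k(kv_src), self.to_v(kv_src)

        def split_heads(z: torch.Tensor) -> torch.Tensor:
            return z.reshape(z.shape[0], z.shape[1], self.heads, -1).transpose(1, 2)

        q, k, v = map(split_heads, (q, k, v))
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(bf, t, d)
        return self.to_out(out)


if __name__ == "__main__":
    frames, tokens, dim = 8, 64, 320                  # toy sizes
    layer = SpatialTemporalAttention(dim)
    video_feats = torch.randn(frames, tokens, dim)    # one clip of 8 frames
    print(layer(video_feats, n_frames=frames).shape)  # torch.Size([8, 64, 320])

Tying all frames to a common reference in this way trades a small amount of per-frame flexibility for noticeably more stable appearance across frames.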

Pipeline

Left, Pipeline: we store all the attention maps during DDIM inversion. At the editing stage of DDIM denoising, we fuse the current attention maps with the stored ones using the proposed Attention Blending Block.

Right, Attention Blending Block: first, we replace the cross-attention maps of unedited words (e.g., road and countryside) with the corresponding maps obtained from the source prompt during inversion. For the edited words (e.g., Porsche car), we blend the inversion-time and editing-time self-attention maps using an adaptive spatial mask obtained from cross-attention, which marks the areas the user wants to edit.
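
The two fusion rules above can be summarized in a short, self-contained PyTorch sketch (illustrative only, not the released implementation): src_* stands for attention maps stored during DDIM inversion with the source prompt, edit_* for maps produced at the current denoising step, and the tensor shapes, token mask, and threshold value are assumptions made for the example.

import torch


def blend_cross_attention(src_cross: torch.Tensor,
                          edit_cross: torch.Tensor,
                          edited_token_mask: torch.Tensor) -> torch.Tensor:
    """Keep inversion-time cross-attention for unedited words and the newly
    generated cross-attention for edited words.

    src_cross, edit_cross: (heads, pixels, tokens)
    edited_token_mask:     (tokens,) bool, True where the word is edited
    """
    return torch.where(edited_token_mask.view(1, 1, -1), edit_cross, src_cross)


def blend_self_attention(src_self: torch.Tensor,
                         edit_self: torch.Tensor,
                         src_cross: torch.Tensor,
                         edited_token_mask: torch.Tensor,
                         threshold: float = 0.3) -> torch.Tensor:
    """Fuse self-attention with a spatial mask derived from cross-attention:
    inside the region the edited word attends to, use the editing-time
    self-attention; elsewhere, reuse the inversion-time self-attention.

    src_self, edit_self: (heads, pixels, pixels)
    src_cross:           (heads, pixels, tokens)
    """
    # Average the edited word's cross-attention over heads -> (pixels,)
    word_attn = src_cross[..., edited_token_mask].mean(dim=(0, 2))
    spatial_mask = (word_attn / word_attn.max().clamp(min=1e-8)) > threshold
    # Select per query pixel: edited region -> edit_self, rest -> src_self.
    return torch.where(spatial_mask.view(1, -1, 1), edit_self, src_self)


if __name__ == "__main__":
    heads, pixels, tokens = 8, 64, 16          # toy sizes
    src_c = torch.rand(heads, pixels, tokens)
    edit_c = torch.rand(heads, pixels, tokens)
    src_s = torch.rand(heads, pixels, pixels)
    edit_s = torch.rand(heads, pixels, pixels)
    edited = torch.zeros(tokens, dtype=torch.bool)
    edited[5] = True                           # e.g. the token of the edited word
    print(blend_cross_attention(src_c, edit_c, edited).shape)
    print(blend_self_attention(src_s, edit_s, src_c, edited).shape)

In the full pipeline, such a fusion would be applied at every matching timestep and attention layer of the denoising UNet.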

Video Style Editing Using Stable Diffusion

+ Watercolor Painting

+ Cartoon Style

+ A Pokémon Cartoon Style

+ Van Gogh Style Painting

+ Ukiyo-e Style

+ Monet Style

Video Attribute Editing Using Stable Diffusion

Squirrel, Carrot ➜ White mouse, Peanut

Squirrel, Carrot ➜ Rabbit, Eggplant

Squirrel, Carrot ➜ Robot mouse, Screwdriver

Cat ➜ A yellow cute leopard

Cat ➜ Black Cat, Grass...

Cat ➜ Red Tiger

Bear ➜ A Red Tiger

Bear ➜ A yellow leopard

Bear ➜ A yellow lion

Video Shape-Aware Editing Using a Tune-A-Video Checkpoint*³

Cat ➜ Porsche Car*

Swan ➜ White Duck*

Swan ➜ Pink flamingo*

A man ➜ A Wonder Woman, With cowboy hat*

A man ➜ A Batman*

A man ➜ A Spider-Man*

Comparisons

Ours vs. Tune-A-Video, Frame SDEdit, Frame Null-Inv, and NLA + Null-Inv:

Cat ➜ Porsche Car*

+ Watercolor Painting

Swan ➜ Swarovski Swan

Swan ➜ Pink flamingo*

Ours vs. Tune-A-Video, Frame SDEdit, and Frame Null-Inv:

+ Ukiyo-e Style

+ Makoto Shinkai Style

BibTeX

@article{qi2023fatezero,
        title={FateZero: Fusing Attentions for Zero-shot Text-based Video Editing}, 
        author={Chenyang Qi and Xiaodong Cun and Yong Zhang and Chenyang Lei and Xintao Wang and Ying Shan and Qifeng Chen},
        year={2023},
        journal={arXiv:2303.09535},
}
  

Explanation

1. For better visualization, we only show the edited words on this page. Please refer to our paper and code for the full source prompts.
2. Most of the results are edited directly with Stable Diffusion v1.4. For shape-aware editing, we use checkpoints of the one-shot video diffusion model Tune-A-Video; those results are marked with *.
3. Our editing method itself does not require training a Tune-A-Video model; the checkpoint is only used for the shape-aware editing results above.

Acknowledgement

This project is supported by the National Key R&D Program of China under grant number 2022ZD0161501. The authors would also like to express sincere gratitude to Tencent AI Lab for providing the necessary computation resources and a conducive environment for research.