DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models

¹School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University; ²ARC Lab, Tencent PCG

DragonDiffusion enables various editing modes for generated or real images, including object moving, object resizing, object appearance replacement, and content dragging. Notably, all editing and content-preservation signals come from the image itself, and the model requires no fine-tuning or additional modules.

Some Editing Results

Continuous Editing

Abstract

Despite the ability of existing large-scale text-to-image (T2I) models to generate high-quality images from detailed textual descriptions, they often lack the capability to precisely edit generated or real images. In this paper, we propose a novel image editing method, DragonDiffusion, enabling drag-style manipulation on diffusion models. Specifically, we construct classifier guidance based on the strong correspondence of intermediate features in the diffusion model. This guidance transforms editing signals into gradients via a feature correspondence loss, which modifies the intermediate representation of the diffusion model. On top of this guidance strategy, we build multi-scale guidance that accounts for both semantic and geometric alignment. Moreover, cross-branch self-attention is added to maintain consistency between the original image and the editing result. Through an efficient design, our method achieves various editing modes for generated or real images, such as object moving, object resizing, object appearance replacement, and content dragging. Notably, all editing and content-preservation signals come from the image itself, and the model requires no fine-tuning or additional modules.
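To make the guidance strategy concrete, here is a minimal PyTorch sketch of how a loss on intermediate diffusion features can be turned into a gradient that steers the denoising trajectory. It assumes a UNet wrapper exposing an intermediate feature map via a hypothetical `return_features` flag and a diffusers-style scheduler; the mask-based loss and the guidance weight are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def feature_correspondence_loss(feat_gen, feat_guid, src_mask, tgt_mask):
    """Encourage features in the target region of the generation branch to
    match features from the source region of the guidance branch.
    feat_*: (B, C, H, W); masks: boolean (H, W) selecting the same number
    of locations. This mask-based formulation is illustrative."""
    src = feat_guid[:, :, src_mask]   # (B, C, N) reference features
    tgt = feat_gen[:, :, tgt_mask]    # (B, C, N) features being edited
    return (1.0 - F.cosine_similarity(src, tgt, dim=1)).mean()

def guided_denoise_step(unet, scheduler, z_t, z_t_guid, t,
                        src_mask, tgt_mask, guidance_weight=40.0):
    """One denoising step with gradient guidance on the noisy latent z_t."""
    z_t = z_t.detach().requires_grad_(True)
    # Generation branch: predict noise and read out an intermediate
    # feature map (`return_features` is a hypothetical hook).
    eps, feat_gen = unet(z_t, t, return_features=True)
    with torch.no_grad():
        # Guidance branch: the same frozen UNet on the reference latent.
        _, feat_guid = unet(z_t_guid, t, return_features=True)
    loss = feature_correspondence_loss(feat_gen, feat_guid, src_mask, tgt_mask)
    grad = torch.autograd.grad(loss, z_t)[0]
    # Classifier-guidance-style update: bias the noise prediction so the
    # next latent descends the feature correspondence loss.
    eps_guided = eps + guidance_weight * grad
    return scheduler.step(eps_guided, t, z_t.detach()).prev_sample
```

In this picture, multi-scale guidance simply repeats the same loss over feature maps from several UNet decoder levels and sums the resulting gradients.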

Methods

Pipeline of the proposed DragonDiffusion. Our method consists of a guidance branch and a generation branch; the guidance branch provides editing and consistency guidance to the generation branch through the correspondence of their intermediate features.
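The cross-branch self-attention used for consistency can be pictured as ordinary self-attention whose keys and values are swapped in from the guidance branch. The single-head module below is an illustrative sketch of this idea rather than the authors' implementation; layer names such as `to_q` simply follow common diffusion-model codebases.

```python
import torch
import torch.nn.functional as F

class CrossBranchSelfAttention(torch.nn.Module):
    """Single-head sketch: queries come from the generation branch, while
    keys and values come from the guidance branch, so the edited result
    stays anchored to the original image's content."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(dim, dim, bias=False)
        self.to_v = torch.nn.Linear(dim, dim, bias=False)
        self.to_out = torch.nn.Linear(dim, dim)

    def forward(self, h_gen, h_guid):
        # h_gen, h_guid: (batch, tokens, dim) hidden states of the two branches.
        q = self.to_q(h_gen)    # queries from the generation branch
        k = self.to_k(h_guid)   # keys from the guidance branch
        v = self.to_v(h_guid)   # values from the guidance branch
        out = F.scaled_dot_product_attention(q, k, v)  # standard attention
        return self.to_out(out)
```

Because the pretrained self-attention projections are reused unchanged, this consistency mechanism fits the page's claim of requiring no fine-tuning or additional modules.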

Results

Object Moving Results

Object Appearance Replacement Results

Content Dragging Results