Despite the ability of existing large-scale text-to-image (T2I) models to generate high-quality images from detailed textual descriptions, they often lack the ability to precisely edit generated or real images. In this paper, we propose a novel image editing method, DragonDiffusion, enabling drag-style manipulation on diffusion models. Specifically, we construct classifier guidance based on the strong correspondence of intermediate features in the diffusion model. This guidance transforms editing signals into gradients via a feature correspondence loss, which modifies the intermediate representation of the diffusion model. Based on this guidance strategy, we further build multi-scale guidance that accounts for both semantic and geometric alignment. Moreover, a cross-branch self-attention mechanism is added to maintain consistency between the original image and the editing result. Through this efficient design, our method supports various editing modes for generated or real images, including object moving, object resizing, object appearance replacement, and content dragging. Notably, all editing and content-preservation signals come from the image itself; the model requires neither fine-tuning nor additional modules.
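As a rough illustration of this guidance strategy, the sketch below computes the gradient of a feature-correspondence energy with respect to the noisy latent. The helper `unet_features_fn`, the mask arguments, and the cosine-similarity energy are illustrative assumptions, not the exact loss used in the paper.

```python
import torch
import torch.nn.functional as F

def correspondence_guidance(z_t, unet_features_fn, src_feats, src_mask, tgt_mask, weight=1.0):
    """Classifier-guidance-style gradient from a feature-correspondence energy (sketch).

    z_t               -- noisy latent at the current denoising step
    unet_features_fn  -- callable returning intermediate UNet features for z_t (hypothetical helper)
    src_feats         -- guidance-branch features at the same step, shape (B, C, H, W)
    src_mask/tgt_mask -- boolean (H, W) masks with the same number of selected pixels,
                         marking the source and target regions of the edit
    """
    z_t = z_t.detach().requires_grad_(True)
    feats = unet_features_fn(z_t)                     # (B, C, H, W) intermediate features
    f_src = src_feats[:, :, src_mask]                 # (B, C, N) source-region features
    f_tgt = feats[:, :, tgt_mask]                     # (B, C, N) target-region features
    # Higher cosine similarity between corresponding features -> lower energy.
    energy = 1.0 - F.cosine_similarity(f_src, f_tgt, dim=1).mean()
    # The gradient of the energy w.r.t. the latent acts as the editing guidance signal.
    grad = torch.autograd.grad(energy, z_t)[0]
    return weight * grad
```

In the spirit of classifier guidance, such a gradient would be scaled and combined with the score estimate at each denoising step to steer the sample toward the desired edit.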
Pipeline of the proposed DragonDiffusion. Our method consists of a guidance branch and a generation branch; the guidance branch provides editing and consistency guidance to the generation branch through the correspondence of their intermediate features.
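To illustrate the consistency mechanism between the two branches, the following sketch shows one way cross-branch self-attention could be realized: the generation branch keeps its own queries while attending to keys and values taken from the guidance branch. The function name and tensor layout are assumptions for illustration, not the authors' implementation.

```python
import torch

def cross_branch_self_attention(q_gen, k_guid, v_guid):
    """Generation-branch queries attend to guidance-branch keys/values (sketch).

    q_gen  -- queries from the generation branch, shape (B, heads, L, d)
    k_guid -- keys from the guidance branch,      shape (B, heads, L, d)
    v_guid -- values from the guidance branch,    shape (B, heads, L, d)
    """
    d = q_gen.shape[-1]
    # Standard scaled dot-product attention, but with K/V swapped in from the
    # guidance branch so the edited image stays consistent with the original content.
    attn = torch.softmax(q_gen @ k_guid.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v_guid
```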