MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance
Paper: arXiv:2406.07209
Our research introduces MS-Diffusion, a framework for layout-guided zero-shot image personalization with multiple subjects. The approach integrates grounding tokens with a feature resampler to maintain detail fidelity across subjects. Guided by the layout, MS-Diffusion further adapts the cross-attention to multi-subject inputs, ensuring that each subject condition acts only on its designated area. The proposed multi-subject cross-attention orchestrates harmonious inter-subject compositions while preserving text control.
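The layout-masking idea can be illustrated with a small sketch. The code below is not the official implementation: the function names, the normalized box format, and the mask rasterization are assumptions, used only to show how each subject's keys and values can be confined to its own layout region during cross-attention.

```python
# Minimal sketch (assumed, not the official MS-Diffusion code) of
# layout-masked multi-subject cross-attention.
import torch


def boxes_to_masks(boxes, h, w, device):
    """Rasterize normalized (x1, y1, x2, y2) boxes into per-subject
    binary masks over an h x w latent grid."""
    masks = torch.zeros(len(boxes), h, w, device=device)
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        masks[i, int(y1 * h):int(y2 * h), int(x1 * w):int(x2 * w)] = 1.0
    return masks  # (num_subjects, h, w)


def multi_subject_cross_attention(q, subject_kv, boxes, h, w):
    """q: (batch, h*w, dim) queries from the latent features.
    subject_kv: one (keys, values) pair per subject, each (batch, tokens, dim).
    Each subject's keys/values only influence queries inside its box."""
    out = torch.zeros_like(q)
    masks = boxes_to_masks(boxes, h, w, q.device).flatten(1)  # (num_subjects, h*w)
    for (k, v), m in zip(subject_kv, masks):
        attn = torch.softmax(q @ k.transpose(-1, -2) / k.shape[-1] ** 0.5, dim=-1)
        out = out + (attn @ v) * m.unsqueeze(0).unsqueeze(-1)
    return out


# Example shapes: two subjects, left and right halves of a 16x16 latent grid.
q = torch.randn(1, 16 * 16, 64)
kv = [(torch.randn(1, 4, 64), torch.randn(1, 4, 64)) for _ in range(2)]
out = multi_subject_cross_attention(
    q, kv, [[0.0, 0.0, 0.5, 1.0], [0.5, 0.0, 1.0, 1.0]], 16, 16
)
```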
Download the pretrained base models from SDXL-base-1.0 and CLIP-G.
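As a hedged example, the base weights can be fetched with huggingface_hub; the repository ids below are assumptions about which SDXL-base-1.0 and CLIP-G checkpoints are intended, so substitute the ones your setup expects.

```python
# Assumed repository ids; adjust them to the checkpoints you actually use.
from huggingface_hub import snapshot_download

sdxl_path = snapshot_download("stabilityai/stable-diffusion-xl-base-1.0")
clip_g_path = snapshot_download("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
print(sdxl_path, clip_g_path)
```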
Please refer to our GitHub repository to prepare the environment and get detailed instructions on how to run the model.
The scale parameter determines the extent of image control. By default, it is set to 0.6. In practice, a scale of 0.4 tends to work better if your input contains subjects that need to affect the whole image, such as a background. Feel free to adjust the scale for your application.
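For illustration only, the sketch below shows where such a scale choice might sit in a generation script. The pipeline call is hypothetical and stands in for the actual entry point documented in the GitHub repository; only the role of the scale value follows the note above.

```python
# Hypothetical usage sketch; real argument names are in the GitHub repository.
subject_images = ["toy.png", "beach.png"]          # the beach acts as a background subject
boxes = [[0.25, 0.4, 0.75, 0.95], [0.0, 0.0, 1.0, 1.0]]

# Start from the default of 0.6; drop to 0.4 when any subject (like the
# background here) needs to influence the whole image.
scale = 0.4 if any(b == [0.0, 0.0, 1.0, 1.0] for b in boxes) else 0.6

# images = pipeline.generate(prompt="a plush toy on a beach",
#                            images=subject_images,
#                            boxes=boxes, scale=scale)  # hypothetical API
```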