Interactive Data Synthesis for Systematic Vision Adaptation via LLMs-AIGCs Collaboration
Qifan Yu, Juncheng Li, Wentao Ye, Siliang Tang, and Yueting Zhuang
Zhejiang University
This project is under construction and we will have all the code ready soon.
GPT-4 can tackle even visual tasks: label anything and generate anything, all in one pipeline.
We make it easier for users to turn their ideas into accurate images. Generate whatever you imagine! (a little sibling of DALL·E 3)
We have released our technical report. (🔥NEW)
We tune ChatGPT at low cost so it generates semantically rich prompts that help AIGC models create fantastic images. Even given a single short word ('room'), our pipeline imagines vivid scene descriptions and generates closely matched, fine-grained images.
Automatic Prompts for AIGC models:
- A room with Nordic-style decoration typically features a clean and minimalist design, with a focus on functionality and simplicity. The color scheme is often light and muted, with shades of white, beige, and gray dominating the palette, creating a sense of calm and tranquility. The furniture is typically made of light-colored wood, with clean lines and simple shapes, and may include iconic Nordic pieces such as a Wegner chair or a Poul Henningsen lamp. Decorative items such as cozy blankets, natural materials like wool or fur, or plants add a touch of warmth and texture to the room. Lighting is often used to create a soft and inviting atmosphere, with natural light streaming in through large windows or artificial light provided by Nordic-inspired fixtures. Overall, a room with Nordic-style decoration creates a sense of simplicity, harmony, and coziness, with a focus on comfort and functionality.
We teach ChatGPT to act as an assistant that imagines varied scenes with different backgrounds from a simple sentence such as 'A white dog sits on a wooden bench.', and then generates abundant data for downstream tasks with the help of AIGC models. (🔥NEW)
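As an illustration, the instruction sent to ChatGPT for this scene-imagination step might be assembled as below. This is a sketch only: the function name and the exact wording are our own assumptions, not the pipeline's actual prompt template.

```python
def build_imagine_prompt(caption: str, n_scenes: int = 3) -> str:
    """Build a hypothetical instruction asking ChatGPT to imagine
    varied scene descriptions around a simple input caption."""
    return (
        "You are an assistant that imagines vivid scenes for image generation.\n"
        f"Given the sentence: '{caption}', write {n_scenes} detailed scene "
        "descriptions with different backgrounds, lighting, and surroundings, "
        "one per line, suitable as prompts for a text-to-image model."
    )

prompt = build_imagine_prompt("A white dog sits on a wooden bench.")
print(prompt)
```

The returned string would then be sent to the chat API; each line of the reply serves as one prompt for the AIGC model.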
Use Stable Diffusion to generate images and annotate bounding boxes and masks for object detection and segmentation, all in one pipeline!
The LLM acts as a data specialist built on top of AIGC models.
- ChatGPT acts as an educator, guiding AIGC models to generate a variety of controllable images across scenarios.
- Generally, given a raw image from the web or an AIGC model, SAM generates the masked regions for the source image and GroundingDINO produces open-set detection results, all in one step. We then filter overlapping bounding boxes to obtain unambiguous annotations.
- We combine the text prompt with a CLIP model to select regions by similarity score; the selected region is then used to generate the target edited image with the Stable Diffusion inpainting pipeline.
- Highlight features:
- A pretrained ControlNet conditioned on SAM masks enables image generation with fine-grained control.
- Category-agnostic SAM masks enable more forms of editing and generation.
- ChatGPT self-chatting enables text-guidance-free control for magic image generation in various scenarios.
- High-resolution images and high-quality annotations effectively enhance large-scale datasets.
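A minimal sketch of the annotation-filtering and region-selection steps described above, using pure-Python stand-ins: the real pipeline works on GroundingDINO boxes and CLIP embeddings, but the overlap filter and the similarity-based pick reduce to the logic below.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def filter_overlaps(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: drop any box that overlaps a
    higher-scoring kept box by more than `thresh` (the 'non-ambiguity' filter)."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]

def select_region(text_emb, region_embs):
    """Return the index of the region whose embedding scores highest
    against the text embedding (cosine similarity, assuming unit norms)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return max(range(len(region_embs)), key=lambda i: dot(text_emb, region_embs[i]))
```

In the actual pipeline the embeddings come from CLIP's image and text encoders, and the winning region is handed to the inpainting model as the edit mask.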
- download visual foundation models
# Segment Anything
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
# GroundingDINO
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha2/groundingdino_swinb_cogcoor.pth
# blended latent diffusion model for foreground-object editing
mkdir -p blended_latent_diffusion/models/ldm/text2img-large/
wget -O blended_latent_diffusion/models/ldm/text2img-large/blend_model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt
- initialize the label anything pipeline
bash annotation.sh
- load AIGC models for generation in edit pipeline and initialize the controllable editing
bash conditional_edit.sh
- Config Explanation
- label_word (label_word_path): contains the input label word(s) from the user
- mode: 'object' or 'scene'; the former focuses on object-centric image generation, while the latter focuses on complex scene image generation
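Put together, a config for the pipeline might look like the sketch below. Only `label_word` and `mode` are documented above; the file path and the dict layout are illustrative assumptions, not the repo's actual config format.

```python
# Hypothetical config sketch; values are illustrative.
config = {
    "label_word_path": "inputs/label_words.txt",   # assumed path, not from the repo
    "label_word": ["person", "beach", "surfboard"],
    "mode": "scene",  # 'object' = object-centric, 'scene' = complex scene generation
}

assert config["mode"] in ("object", "scene")
```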
- label word:
person, beach, surfboard
- High quality description prompt automatically generated:
A couple enjoys a relaxing day at the beach with the man walking together with the woman, holding a big surfboard. The serene scene is complete with the sound of waves and the warm sun and there are many people lying on the beach.
- Generated images in magic scenarios:
- Specific category of object in an image (given only 'human face')
- Total annotations with category sets
- ChatGPT chat for AIGC model
- Label segmentation masks and detection bounding boxes
- Annotate segmentation and detection for Conditional Diffusion Demo
- Using Grounding DINO and Segment Anything for category-specific labelling.
- Interactive control on different masks for existing image editing and generated image editing.
- ChatGPT guided Controllable Image Editing.
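For instance, deriving a detection box from a SAM-style binary mask can be sketched as follows; this is a stdlib-only stand-in (lists of 0/1 rows instead of the real mask arrays), not the repo's actual labelling code.

```python
def mask_to_bbox(mask):
    """Return (x1, y1, x2, y2) tightly enclosing the nonzero pixels of a
    2-D binary mask given as a list of rows; None if the mask is empty."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not ys:
        return None
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(mask_to_bbox(mask))  # (1, 1, 3, 3)
```

Applying this per SAM mask yields detection boxes and segmentation annotations from the same pass, which is what lets the pipeline label both in one step.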
[2] https://github.com/huggingface/diffusers
[3] https://github.com/facebookresearch/segment-anything
[4] https://github.com/IDEA-Research/Grounded-Segment-Anything/
If you find this work useful for your research, please cite our paper and star our GitHub repo:
@misc{yu2023interactive,
title={Interactive Data Synthesis for Systematic Vision Adaptation via LLMs-AIGCs Collaboration},
author={Qifan Yu and Juncheng Li and Wentao Ye and Siliang Tang and Yueting Zhuang},
year={2023},
eprint={2305.12799},
archivePrefix={arXiv},
primaryClass={cs.CV}
}