[2024.07.01] - Inference code is now available.
[2024.07.01] - Hugging Face online demo is available here!
[2024.06.30] - Our online demo is available here!
- Release online demo
- Release our latest checkpoint
- Release model and training code
- Support JoyType in ComfyUI
- Release our research paper
The figure illustrates the overall framework of our method, covering data collection, the training pipeline, and the inference pipeline.

In the data collection phase, we leveraged the open-source CapOnImage2M dataset, selecting a subset of 1M images. For each selected image, we employed a visual language model (e.g., CogVLM) to generate a textual description, thereby obtaining a prompt associated with the image. We then applied the Canny algorithm to extract edges from the text regions within each image, producing a canny map.

The training pipeline comprises three primary components: the latent diffusion module, the Font ControlNet module, and the loss design module. During training, the raw image, the canny map, and the prompt are fed into the Variational Autoencoder (VAE), Font ControlNet, and the text encoder, respectively. The loss is split into two parts: one in the latent space and one in the pixel space. Within the latent space, we utilize the loss function
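As a minimal sketch of the canny-map step above, the helper below keeps Canny edges only inside given text bounding boxes, assuming OpenCV is available. The function name, box format, and thresholds are illustrative, not the exact choices used by JoyType.

```python
import cv2
import numpy as np

def make_canny_map(image_path, text_boxes, low=100, high=200):
    """Build a canny map whose edges are kept only inside the text regions.

    `text_boxes` is a list of (x, y, w, h) rectangles covering the text;
    the thresholds are illustrative, not the values used by JoyType.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Edge map over the whole image, then masked down to the text regions
    edges = cv2.Canny(gray, low, high)
    canny_map = np.zeros_like(edges)
    for x, y, w, h in text_boxes:
        canny_map[y:y + h, x:x + w] = edges[y:y + h, x:x + w]
    return canny_map
```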
```bash
# Initialize a conda environment
conda create -n joytype python=3.9
conda activate joytype

# Clone the JoyType repo
git clone ...
cd JoyType

# Install requirements
pip install -r requirements.txt
```
[Recommended]: We have already released demos on JDHealth and Hugging Face!
You can run inference with the following command:
```bash
python infer.py --prompt "a card" --input_yaml examples/test.yaml --img_name test
```
- `prompt` corresponds to the text description of the image you want to generate
- `input_yaml` corresponds to the layout information of the texts in the generated image (see the sketch after this list)
- `img_name` corresponds to the file name of the generated image
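As a loose illustration of what a layout file might contain, the snippet below writes a YAML file describing each text element and its bounding box. The field names and box format are assumptions; the actual schema is whatever examples/test.yaml in the repo defines.

```python
import yaml  # pip install pyyaml

# Hypothetical schema: the real field names are defined by examples/test.yaml.
layout = {
    "texts": [
        {"text": "JoyType", "box": [60, 40, 300, 90]},   # assumed [x, y, w, h]
        {"text": "a card", "box": [60, 160, 200, 60]},
    ]
}

with open("examples/my_layout.yaml", "w") as f:
    yaml.safe_dump(layout, f, allow_unicode=True)
```

You could then pass the new file via `--input_yaml examples/my_layout.yaml`.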
You can see more arguments with:
```bash
python infer.py --help
```
Please note that the model will be pulled from Hugging Face by default. If you want to load it locally, pre-download the model from here and set the `--load_path` argument.
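For example, one way to pre-download the weights is with the huggingface_hub library; the repo id below is a placeholder, so substitute the model linked above.

```python
from huggingface_hub import snapshot_download

# Placeholder repo id: substitute the actual JoyType model repo from the link above.
local_dir = snapshot_download(repo_id="<joytype-model-repo>", local_dir="./models/joytype")
print(f"Model downloaded to {local_dir}")
```

Then point the inference script at the local copy, e.g. `python infer.py ... --load_path ./models/joytype`.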