Emu: An Open Multimodal Generalist

Quan Sun1*, Qiying Yu2,1*, Yufeng Cui1*, Fan Zhang1*, Xiaosong Zhang1*, Yueze Wang1, Hongcheng Gao1,
Jingjing Liu2, Tiejun Huang1,3, Xinlong Wang1

1 BAAI, 2 THU, 3 PKU
* Equal Contribution

| Paper | Demo |

Emu is a multimodal generalist that can seamlessly generate images and text in a multimodal context. Emu is trained with a unified autoregressive objective, i.e., predicting the next element, whether that element is a visual embedding or a textual token. Trained under this objective, Emu can serve as a generalist interface for both image-to-text and text-to-image tasks.
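The unified objective above can be sketched as follows: at text positions the model is trained with standard cross-entropy over the vocabulary, while at visual-embedding positions it regresses the next embedding. This is a minimal NumPy sketch of the idea only; the function name, the L2 regression loss, and the unweighted combination are illustrative assumptions, not Emu's actual implementation:

```python
import numpy as np

def next_element_loss(logits, target_ids, pred_embeds, target_embeds, is_text):
    """Unified predict-the-next-element loss (illustrative sketch).

    logits:        (T, V) predicted token logits at each position
    target_ids:    (T,)   next-token ids (used where is_text is True)
    pred_embeds:   (T, D) predicted visual embeddings
    target_embeds: (T, D) next visual embeddings (used where is_text is False)
    is_text:       (T,)   boolean mask, True at text positions
    """
    # Cross-entropy on text positions: -log softmax(logits)[target]
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    ce = -log_probs[np.arange(len(target_ids)), target_ids]

    # L2 regression on visual-embedding positions
    l2 = ((pred_embeds - target_embeds) ** 2).mean(axis=-1)

    # Each position contributes exactly one of the two losses
    per_pos = np.where(is_text, ce, l2)
    return per_pos.mean()
```

The key point is that both modalities flow through one autoregressive sequence, so a single loss over all positions suffices.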

Generalist Interface

Emu serves as a generalist interface capable of diverse multimodal tasks, such as image captioning, image/video question answering, and text-to-image generation, together with new abilities like in-context text and image generation, and image blending.

Setup

Clone this repository and install required packages:

git clone https://github.com/baaivision/Emu
cd Emu

pip install -r requirements.txt

Model Weights

We release the pretrained and instruction-tuned weights of Emu. Our weights are subject to LLaMA-1's license.

| Model name | Weight |
| :--- | :--- |
| Emu w/ Decoder | 🤗 HF link (34GB) |
| Emu-I | 🤗 HF link (27GB) |

Inference

At present, we provide inference code that accepts interleaved image-text and video as input, and outputs text and images.
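Interleaved inputs are typically serialized into a single sequence, with each image's slot marked by special tokens. A toy sketch of that serialization; the `[IMG]`/`[/IMG]` placeholders and the segment format here are hypothetical, not Emu's actual special tokens (the real prompt construction lives in the inference scripts):

```python
# Hypothetical placeholder tokens; the real special tokens are
# defined by the model's tokenizer, not by this sketch.
IMG_START, IMG_END = "[IMG]", "[/IMG]"

def build_prompt(segments):
    """Serialize interleaved segments into one prompt string.

    segments: list of ("text", str) or ("image", path) tuples.
    """
    parts = []
    for kind, value in segments:
        if kind == "text":
            parts.append(value)
        elif kind == "image":
            # The image pixels go through the visual encoder separately;
            # the prompt only carries a placeholder marking the slot.
            parts.append(f"{IMG_START}{value}{IMG_END}")
        else:
            raise ValueError(f"unknown segment kind: {kind}")
    return " ".join(parts)
```

For example, `build_prompt([("image", "cat.jpg"), ("text", "Describe this image.")])` yields `"[IMG]cat.jpg[/IMG] Describe this image."`.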

For the instruction-tuned model, we provide examples for image captioning, visual question answering, and interleaved multi-image understanding:

python inference.py --instruct --ckpt-path ${INSTRUCT_CKPT_PATH}

For the pretrained model, we provide an example of in-context learning:

python inference.py --ckpt-path ${PRETRAIN_CKPT_DIR}/multimodal_encoder/pytorch_model.bin

For image generation, we provide examples for image blending, text-to-image generation, and in-context generation:

python image_inference.py --ckpt-path ${PRETRAIN_CKPT_DIR}
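Image blending can be understood as mixing the visual embeddings of two source images before handing the result to the image decoder. A toy NumPy sketch of just the mixing step; the linear interpolation and the 0.5 default weight are illustrative assumptions, not the repository's actual blending code:

```python
import numpy as np

def blend_embeddings(emb_a, emb_b, alpha=0.5):
    """Linearly interpolate two visual-embedding sequences.

    emb_a, emb_b: (N, D) arrays of N visual embeddings of dimension D.
    alpha: blend weight in [0, 1]; 0.0 returns emb_a, 1.0 returns emb_b.
    """
    if emb_a.shape != emb_b.shape:
        raise ValueError("embedding sequences must have the same shape")
    return (1.0 - alpha) * emb_a + alpha * emb_b
```

The blended sequence can then be decoded the same way as embeddings produced from a single image.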

Schedule

We are committed to open-sourcing all Emu-related materials, including:

  • The weights of Emu and Emu-I
  • Inference example for interleaved image-text as input, text as output
  • Video inference example
  • Weights of image decoder & image generation/blending example
  • YT-Storyboard-1B pretraining data
  • Pretraining code
  • Instruction tuning code
  • Evaluation code

We hope to foster the growth of our community through open-sourcing and promoting collaboration👬. Let's step towards multimodal intelligence together🍻.

Acknowledgement

We thank the great work from LLaMA, BLIP-2, Stable Diffusion, and FastChat.

Citation

If you find Emu useful for your research and applications, please consider starring this repository and citing:

@article{Emu,
  title={Generative Pretraining in Multimodality},
  author={Sun, Quan and Yu, Qiying and Cui, Yufeng and Zhang, Fan and Zhang, Xiaosong and Wang, Yueze and Gao, Hongcheng and Liu, Jingjing and Huang, Tiejun and Wang, Xinlong},
  journal={arXiv preprint arXiv:2307.05222},
  year={2023},
}
