python train_caption.py
cd ..
cd MasaCtrl/
python eval_masa_w_captioning_hugging.py
python eval_masa2.py
cd ..
python eval_masa_w_4_interpolate7.py
python visual_total.py
python visual_total2.py
python visual_total3.py
cd ..
python eval_masa_w_4_interpolate.py
python blip2_eval_masa_w_1_captioning.py
python blip2_eval_masa_w_2_captioning_template.py
python blip2_eval_masa_w_3_summarize.py
python blip2_eval_masa_w_4_interpolate.py
python eval_masa.py
cd ..
python eval_masa_w_1_captioning.py
python eval_masa_w_2_captioning_template.py
python eval_masa_w_3_summarize.py
python eval_masa_w_4_interpolate.py
cd ..
python eval_masa.py
python 2eval_masa_w_1_captioning.py
python 2eval_masa_w_2_captioning_template.py
python 2eval_masa_w_3_summarize.py
python 2eval_masa_w_4_interpolate.py
cd ..
python blip2_interpolate-opt-2.7b0.6.py
python blip2_interpolate-opt-2.7b0.7.py
python blip2_interpolate-opt-2.7b0.8.py
python blip2_interpolate-opt-2.7b0.6qa.py
python blip2_interpolate-opt-2.7b0.7qa.py
python blip2_interpolate-opt-2.7b0.8qa.py
python blip2_interpolate-opt-2.7b0.6masa3.py
python blip2_interpolate-opt-2.7b0.7masa3.py
python blip2_interpolate-opt-2.7b0.8masa3.py
python blip2_interpolate-opt-2.7b0.6qamasa3.py
python blip2_interpolate-opt-2.7b0.7qamasa3.py
python blip2_interpolate-opt-2.7b0.8qamasa3.py
cd ..
python blip2_interpolate0.7.py
python blip2_interpolate0.8.py
python blip2_interpolate0.6qa.py
python blip2_interpolate0.7qa.py
python blip2_interpolate0.8qa.py
python blip2_interpolate0.6masa3.py
python blip2_interpolate0.7masa3.py
python blip2_interpolate0.8masa3.py
python blip2_interpolate0.6qamasa3.py
python blip2_interpolate0.7qamasa3.py
python blip2_interpolate0.8qamasa3.py
cd ..
python blip2_interpolate0.6backis.py
python blip2_interpolate0.7backis.py
python blip2_interpolate0.8backis.py
python blip2_interpolate0.6masa3backis.py
python blip2_interpolate0.7masa3backis.py
python blip2_interpolate0.8masa3backis.py
cd ..
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Announcement: BLIP is now officially integrated into LAVIS - a one-stop library for language-and-vision research and applications!
This is the PyTorch code of the BLIP paper [blog]. The code has been tested on PyTorch 1.10. To install the dependencies, run
pip install -r requirements.txt
Catalog:
- Inference demo
- Pre-trained and finetuned checkpoints
- Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- Pre-training code
- Zero-shot video-text retrieval
- Download of bootstrapped pre-training datasets
Run our interactive demo using the Colab notebook (no GPU needed); a minimal local captioning sketch follows the list below. The demo includes code for:
- Image captioning
- Open-ended visual question answering
- Multimodal / unimodal feature extraction
- Image-text matching
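For local experimentation, here is a minimal captioning sketch in the spirit of the Colab demo. It assumes the repository's `models.blip.blip_decoder` helper and a downloaded captioning checkpoint; the checkpoint path, test image, and generation parameters below are placeholders/assumptions and should be adapted to your setup.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode

from models.blip import blip_decoder  # captioning model defined in this repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
image_size = 384

# CLIP-style normalization, as used by the demo notebook.
transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])

# Placeholder: point this at a COCO captioning checkpoint from the table below.
model = blip_decoder(pretrained='path/to/blip_caption_checkpoint.pth',
                     image_size=image_size, vit='base')
model.eval().to(device)

raw_image = Image.open('demo.jpg').convert('RGB')   # any local test image
image = transform(raw_image).unsqueeze(0).to(device)

with torch.no_grad():
    # Beam search; sample=True would switch to nucleus sampling instead.
    caption = model.generate(image, sample=False, num_beams=3,
                             max_length=20, min_length=5)
print('caption:', caption[0])
```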
Try out the Web demo, integrated into Huggingface Spaces 🤗 using Gradio.
A Replicate web demo and Docker image are also available.
Pre-trained checkpoints:
Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L |
---|---|---|---|
14M | Download | - | - |
129M | Download | Download | Download |
Finetuned checkpoints:
Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L |
---|---|---|---|
Image-Text Retrieval (COCO) | Download | - | Download |
Image-Text Retrieval (Flickr30k) | Download | - | Download |
Image Captioning (COCO) | - | Download | Download |
VQA | Download | Download | - |
NLVR2 | Download | - | - |
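The links above point to standard PyTorch .pth checkpoints (the same kind of file that the finetuning instructions below set as 'pretrained'). Purely as an illustration, a checkpoint URL can be fetched and inspected like this before wiring it into a config; the exact keys inside depend on the checkpoint.

```python
import torch

# Example URL taken from the retrieval finetuning instructions below;
# substitute any checkpoint link from the tables above.
url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth'

# Downloads to the torch hub cache and loads the file on CPU.
checkpoint = torch.hub.load_state_dict_from_url(url, map_location='cpu')

# Inspect the top-level contents before using it as a 'pretrained' weight.
print(type(checkpoint), list(checkpoint.keys())[:10])
```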
Image-Text Retrieval:
- Download COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly.
- To evaluate the finetuned BLIP model on COCO, run:
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco \
--evaluate
- To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
--config ./configs/retrieval_coco.yaml \
--output_dir output/retrieval_coco
Image Captioning:
- Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
- To evaluate the finetuned BLIP model on COCO, run:
python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate
- To evaluate the finetuned BLIP model on NoCaps, generate results with the following command (evaluation needs to be performed on the official server):
python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py
- To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
python -m torch.distributed.run --nproc_per_node=8 train_caption.py
VQA:
- Download VQA v2 dataset and Visual Genome dataset from the original websites, and set 'vqa_root' and 'vg_root' in configs/vqa.yaml.
- To evaluate the finetuned BLIP model, generate results with the following command (evaluation needs to be performed on the official server):
python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --evaluate
- To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/vqa.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
python -m torch.distributed.run --nproc_per_node=16 train_vqa.py
NLVR2:
- Download NLVR2 dataset from the original websites, and set 'image_root' in configs/nlvr.yaml.
- To evaluate the finetuned BLIP model, run
python -m torch.distributed.run --nproc_per_node=8 train_nlvr.py --evaluate
- To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/nlvr.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
python -m torch.distributed.run --nproc_per_node=16 train_nlvr.py
In order to finetune a model with ViT-L, simply change the config file to set 'vit' to 'large'. The batch size and learning rate may also need to be adjusted accordingly (please see the paper's appendix for hyper-parameter details). Gradient checkpointing can also be activated in the config file to reduce GPU memory usage.
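As a concrete illustration (not part of the original instructions), the config change can be scripted with PyYAML. The 'vit' key is named in the text above; the batch-size and learning-rate keys below are assumptions about the config layout and should be checked against the actual file.

```python
import yaml  # pip install pyyaml

cfg_path = 'configs/caption_coco.yaml'   # any of the finetuning configs

with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg['vit'] = 'large'          # switch the vision backbone to ViT-L
# Hypothetical keys: adjust them to whatever the config actually calls them.
cfg['batch_size'] = 16        # ViT-L typically needs a smaller batch size
cfg['init_lr'] = 1e-5         # and a correspondingly lower learning rate

with open(cfg_path, 'w') as f:
    yaml.safe_dump(cfg, f)
```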
Pre-training:
- Prepare training json files where each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'image': path_of_image, 'caption': text_of_image}. A short sketch for building such a file follows the pre-training command below.
- In configs/pretrain.yaml, set 'train_file' as the paths for the json files.
- Pre-train the model using 8 A100 GPUs:
python -m torch.distributed.run --nproc_per_node=8 pretrain.py --config ./configs/Pretrain.yaml --output_dir output/Pretrain
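The training json files described above can be produced with a few lines of Python. The image folder and caption source here are placeholders; only the output format ({'image': ..., 'caption': ...} records in a list) comes from the instructions above.

```python
import json
from pathlib import Path

# Placeholder inputs: a folder of images and a dict mapping filename -> caption.
image_dir = Path('my_dataset/images')
captions = json.load(open('my_dataset/captions.json'))  # {'0001.jpg': 'a dog ...', ...}

# Each training file is a list of {'image': ..., 'caption': ...} records,
# matching the format expected by the 'train_file' entries in configs/pretrain.yaml.
records = [{'image': str(image_dir / name), 'caption': text}
           for name, text in captions.items()]

with open('my_pretrain_data.json', 'w') as f:
    json.dump(records, f)
```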
Zero-shot video-text retrieval:
- Download MSRVTT dataset following the instructions from https://github.com/salesforce/ALPRO, and set 'video_root' accordingly in configs/retrieval_msrvtt.yaml.
- Install decord with
pip install decord
- To perform zero-shot evaluation, run
python -m torch.distributed.run --nproc_per_node=8 eval_retrieval_video.py
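decord is only needed for reading video frames. A minimal sketch of uniformly sampling frames from an MSRVTT clip is shown below; the video path, frame count, and lack of preprocessing are assumptions for illustration, not the values hard-coded in eval_retrieval_video.py.

```python
import numpy as np
from decord import VideoReader, cpu

video_path = 'msrvtt/videos/video0.mp4'   # placeholder path under 'video_root'
num_frames = 8                            # assumed number of frames per clip

vr = VideoReader(video_path, ctx=cpu(0))
# Uniformly spaced frame indices across the whole clip.
indices = np.linspace(0, len(vr) - 1, num=num_frames).astype(int).tolist()
frames = vr.get_batch(indices).asnumpy()  # (num_frames, H, W, 3) uint8 array

print(frames.shape)
```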
Download of bootstrapped pre-training datasets:
We provide bootstrapped pre-training datasets as json files. Each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'url': url_of_image, 'caption': text_of_image}. A simple download sketch follows the table below.
Image source | Filtered web caption | Filtered synthetic caption by ViT-B | Filtered synthetic caption by ViT-L |
---|---|---|---|
CC3M+CC12M+SBU | Download | Download | Download |
LAION115M | Download | Download | Download |
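Since the json entries only contain image URLs, the images themselves have to be fetched separately. A deliberately naive download loop under those assumptions might look like the following (the input filename is a placeholder); for the full LAION115M split a dedicated bulk downloader is more practical.

```python
import json
from pathlib import Path

import requests  # pip install requests

out_dir = Path('bootstrapped_images')
out_dir.mkdir(exist_ok=True)

# Each entry is {'url': url_of_image, 'caption': text_of_image}, as described above.
entries = json.load(open('bootstrapped_captions.json'))   # placeholder filename

pairs = []
for i, item in enumerate(entries[:1000]):        # small slice for illustration
    path = out_dir / f'{i:08d}.jpg'
    try:
        resp = requests.get(item['url'], timeout=10)
        resp.raise_for_status()
        path.write_bytes(resp.content)
    except Exception:
        continue                                  # skip dead links
    # Re-emit records in the {'image', 'caption'} format used for pre-training.
    pairs.append({'image': str(path), 'caption': item['caption']})

with open('bootstrapped_train.json', 'w') as f:
    json.dump(pairs, f)
```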
If you find this code to be useful for your research, please consider citing:
@inproceedings{li2022blip,
  title={BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  author={Junnan Li and Dongxu Li and Caiming Xiong and Steven Hoi},
  year={2022},
  booktitle={ICML},
}
The implementation of BLIP relies on resources from ALBEF, Huggingface Transformers, and timm. We thank the original authors for open-sourcing their work.