Commit a92d99b

upload code

yiren-jian committed Oct 20, 2023
1 parent 8258b25 commit a92d99b
Showing 530 changed files with 58,507 additions and 6 deletions.
Binary file added ._.DS_Store
Binary file not shown.
14 changes: 14 additions & 0 deletions LICENSE.txt
@@ -0,0 +1,14 @@
BSD 3-Clause License

Copyright (c) 2022 Salesforce, Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of Salesforce.com nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
7 changes: 7 additions & 0 deletions MANIFEST.in
@@ -0,0 +1,7 @@
recursive-include lavis/configs *.yaml *.json
recursive-include lavis/projects *.yaml *.json

recursive-exclude lavis/datasets/download_scripts *
recursive-exclude lavis/output *

include requirements.txt
55 changes: 49 additions & 6 deletions README.md
@@ -1,12 +1,9 @@
# Bootstrapping Vision-Language Learning with Decoupled Language Pre-training


This repo covers the implementation of BLIP-2 with P-Former in **[Bootstrapping Vision-Language Learning with Decoupled Language Pre-training](https://arxiv.org/abs/2307.07063)** by [Yiren Jian](https://yiren-jian.github.io/), [Chongyang Gao](https://gcyzsl.github.io/), and [Soroush Vosoughi](https://www.cs.dartmouth.edu/~soroush/). The paper was accepted to NeurIPS 2023. The code is developed on top of the [LAVIS](https://github.com/salesforce/LAVIS/) project (cloned on Feb 23, 2023).

<img src="overview.png" width="800">

We mainly add the following files in `lavis/models/blip2_models`:

- [x] `blip2_pformer_opt.py`
@@ -29,7 +26,53 @@ pip install -e .
Please follow instructions from [LAVIS](https://github.com/salesforce/LAVIS/) to download pre-training datasets.
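For convenience, LAVIS bundles download helpers under `lavis/datasets/download_scripts/`; a plausible invocation is sketched below (the script names are assumptions here, so verify them against your LAVIS checkout):

```bash
# Assumed LAVIS download helpers -- confirm the exact script names in
# lavis/datasets/download_scripts/ before running.
python lavis/datasets/download_scripts/download_coco.py
python lavis/datasets/download_scripts/download_vg.py
python lavis/datasets/download_scripts/download_sbu.py
```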

## Training
Stage 0 (training the P-Former):
```bash
bash run_scripts/blip-T/train/pretrain_stage0.sh
```

Stage 1 (training the Q-Former with the pre-trained P-Former):
```bash
bash run_scripts/blip-T/train/pretrain_stage1.sh
```

Stage 2 (end-to-end BLIP-2 training with the pre-trained P-Former):
```bash
bash run_scripts/blip-T/train/pretrain_stage2.sh
```

Fine-tuning on MSCOCO:
```bash
bash run_scripts/blip-T/train/train_caption_coco.sh
```
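Each of the stage scripts above is presumably a thin wrapper around LAVIS's `train.py` entry point; a sketch of such a launch follows, where the config path is an illustrative guess rather than the actual file name:

```bash
# Illustrative launch only -- the real config lives under lavis/projects/
# and its exact path may differ from this guess.
python -m torch.distributed.run --nproc_per_node=8 train.py \
    --cfg-path lavis/projects/blip-T/train/pretrain_stage1.yaml
```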

## Pretrained Models
- [ ] [our_stage0]()
- [ ] [our_stage1]()
- [ ] [our_stage2]()

## Evaluation
```bash
bash run_scripts/blip-T/eval/eval_gqa_zeroshot_opt2.7b.sh
bash run_scripts/blip-T/eval/eval_okvqa_zeroshot_opt2.7b.sh
bash run_scripts/blip-T/eval/validate_vqa_zeroshot_opt2.7b.sh
bash run_scripts/blip-T/eval/eval_cap_coco_opt2.7b.sh
```

## Training and Evaluation Logs
You can find our training and evaluation logs [here](training_logs/).

## Acknowledgements
The code is developed based on the [BLIP2](https://openreview.net/forum?id=KU9UojoX7U) and [LAVIS](https://github.com/salesforce/LAVIS/) projects.

## Citation
```bibtex
@inproceedings{jian2023bootstrapping,
  title={Bootstrapping Vision-Language Learning with Decoupled Language Pre-training},
  author={Jian, Yiren and Gao, Chongyang and Vosoughi, Soroush},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=8Kch0ILfQH}
}
```
26 changes: 26 additions & 0 deletions app/__init__.py
@@ -0,0 +1,26 @@
"""
# Copyright (c) 2022, salesforce.com, inc.
# All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
# For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
"""

from PIL import Image
import requests

import streamlit as st
import torch


@st.cache()
def load_demo_image():
    img_url = (
        "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
    )
    raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
    return raw_image


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

cache_root = "/export/home/.cache/lavis/"
87 changes: 87 additions & 0 deletions app/calculate_coco_features.py
@@ -0,0 +1,87 @@
"""
# Copyright (c) 2022, salesforce.com, inc.
# All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
# For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
"""

from PIL import Image
import requests
import torch

import os

from lavis.common.registry import registry
from lavis.processors import *
from lavis.models import *
from lavis.common.utils import build_default_model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


def load_demo_image():
    img_url = (
        "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
    )
    raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

    return raw_image


def read_img(filepath):
    raw_image = Image.open(filepath).convert("RGB")

    return raw_image


# model
model_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth"
feature_extractor = BlipFeatureExtractor(pretrained=model_url)

feature_extractor.eval()
feature_extractor = feature_extractor.to(device)

# preprocessors
vis_processor = BlipImageEvalProcessor(image_size=224)
text_processor = BlipCaptionProcessor()

# files to process
# file_root = "/export/home/.cache/lavis/coco/images/val2014"
file_root = "/export/home/.cache/lavis/coco/images/train2014"
filepaths = os.listdir(file_root)

print(len(filepaths))

caption = "dummy"

path2feat = dict()
bsz = 256

images_in_batch = []
filepaths_in_batch = []

for filename in filepaths:
    filepath = os.path.join(file_root, filename)

    image = read_img(filepath)
    image = vis_processor(image).unsqueeze(0)

    images_in_batch.append(image)
    filepaths_in_batch.append(filepath)

    # Flush once a full batch has accumulated; appending before this check
    # ensures every file lands in some batch.
    if len(images_in_batch) == bsz:
        batch = torch.cat(images_in_batch, dim=0).to(device)
        with torch.no_grad():
            image_features = feature_extractor(
                batch, caption, mode="image", normalized=True
            )[:, 0]

        for fp, image_feat in zip(filepaths_in_batch, image_features):
            path2feat[os.path.basename(fp)] = image_feat.detach().cpu()

        images_in_batch = []
        filepaths_in_batch = []

        print(len(path2feat), image_features.shape)

# Flush the final partial batch, if any.
if images_in_batch:
    batch = torch.cat(images_in_batch, dim=0).to(device)
    with torch.no_grad():
        image_features = feature_extractor(
            batch, caption, mode="image", normalized=True
        )[:, 0]
    for fp, image_feat in zip(filepaths_in_batch, image_features):
        path2feat[os.path.basename(fp)] = image_feat.detach().cpu()

torch.save(path2feat, "path2feat_coco_train2014.pth")
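As a quick sanity check, the saved dictionary can be reloaded and queried by image basename; the key below is an illustrative COCO file name, not one verified to exist:

```python
# Hypothetical downstream usage: reload the cached features (keys are image
# basenames, values are CPU feature tensors) and look one up.
import torch

path2feat = torch.load("path2feat_coco_train2014.pth")
feat = path2feat["COCO_train2014_000000000009.jpg"]  # illustrative key
print(feat.shape)
```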
98 changes: 98 additions & 0 deletions app/caption.py
@@ -0,0 +1,98 @@
"""
# Copyright (c) 2022, salesforce.com, inc.
# All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
# For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
"""

import streamlit as st
from app import device, load_demo_image
from app.utils import load_model_cache
from lavis.processors import load_processor
from PIL import Image


def app():
    # ===== layout =====
    model_type = st.sidebar.selectbox("Model:", ["BLIP_base", "BLIP_large"])

    sampling_method = st.sidebar.selectbox(
        "Sampling method:", ["Beam search", "Nucleus sampling"]
    )

    st.markdown(
        "<h1 style='text-align: center;'>Image Description Generation</h1>",
        unsafe_allow_html=True,
    )

    instructions = """Try the provided image or upload your own:"""
    file = st.file_uploader(instructions)

    use_beam = sampling_method == "Beam search"

    col1, col2 = st.columns(2)

    if file:
        raw_img = Image.open(file).convert("RGB")
    else:
        raw_img = load_demo_image()

    col1.header("Image")

    w, h = raw_img.size
    scaling_factor = 720 / w
    resized_image = raw_img.resize((int(w * scaling_factor), int(h * scaling_factor)))

    col1.image(resized_image, use_column_width=True)
    col2.header("Description")

    cap_button = st.button("Generate")

    # ==== event ====
    vis_processor = load_processor("blip_image_eval").build(image_size=384)

    if cap_button:
        if model_type.startswith("BLIP"):
            blip_type = model_type.split("_")[1].lower()
            model = load_model_cache(
                "blip_caption",
                model_type=f"{blip_type}_coco",
                is_eval=True,
                device=device,
            )

            img = vis_processor(raw_img).unsqueeze(0).to(device)
            captions = generate_caption(
                model=model, image=img, use_nucleus_sampling=not use_beam
            )

            col2.write("\n\n".join(captions), use_column_width=True)


def generate_caption(
    model, image, use_nucleus_sampling=False, num_beams=3, max_length=40, min_length=5
):
    samples = {"image": image}

    captions = []
    if use_nucleus_sampling:
        for _ in range(5):
            caption = model.generate(
                samples,
                use_nucleus_sampling=True,
                max_length=max_length,
                min_length=min_length,
                top_p=0.9,
            )
            captions.append(caption[0])
    else:
        caption = model.generate(
            samples,
            use_nucleus_sampling=False,
            num_beams=num_beams,
            max_length=max_length,
            min_length=min_length,
        )
        captions.append(caption[0])

    return captions
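For reference, a minimal non-Streamlit sketch of driving `generate_caption` (this assumes LAVIS's `load_model_and_preprocess` API and a local `demo.jpg`; neither is part of this file):

```python
# Hypothetical standalone usage of generate_caption(); assumes LAVIS's
# load_model_and_preprocess API and an image file at ./demo.jpg.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)
image = vis_processors["eval"](Image.open("demo.jpg").convert("RGB"))
print(generate_caption(model=model, image=image.unsqueeze(0).to(device)))
```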
