CLIPxGPT Captioner
CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2. The model uses a mapping module to "translate" CLIP embeddings into GPT-2's embedding space. It is trained on the Flickr30k dataset, downloaded from Kaggle.
The goal of the project was to explore the feasibility of connecting CLIP and GPT-2, and to check whether, with a relatively short training time and a small dataset, the model would be able to recognize situations in pictures. In its first version, the model achieved satisfactory results.
The model uses prefixes as in the ClipCap paper. In the original design the prefix length was 1, but after reading the publication it was changed to 4, which improved performance.
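As a rough sketch of the prefix idea (the dimensions are assumptions: CLIP ViT-B/32 outputs a 512-d image embedding and GPT-2 uses 768-d token embeddings; the real mapping module is a Transformer, here replaced by a single random linear projection purely to illustrate the shapes):

```python
import numpy as np

# Assumed dimensions: CLIP image embedding (512), GPT-2 hidden size (768),
# and prefix length 4 as described above.
CLIP_DIM, GPT_DIM, PREFIX_LEN = 512, 768, 4

rng = np.random.default_rng(0)
clip_embedding = rng.standard_normal(CLIP_DIM)  # stand-in for a frozen-CLIP output

# In the real model a Transformer mapping module produces the prefix;
# a random linear projection stands in for it here.
W = rng.standard_normal((CLIP_DIM, PREFIX_LEN * GPT_DIM)) * 0.02

prefix = (clip_embedding @ W).reshape(PREFIX_LEN, GPT_DIM)
print(prefix.shape)  # (4, 768) -- four "virtual tokens" prepended to GPT-2's input
```

GPT-2 then generates the caption conditioned on these four prefix embeddings instead of a text prompt.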
The model was trained with a frozen CLIP, a fully trained mapping module (6 Transformer encoder layers) and a partially frozen GPT-2 (the first and last 14 layers were trained).
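A minimal sketch of this freezing scheme, using toy modules in place of the real CLIP and GPT-2 (the module sizes and the number of unfrozen blocks here are illustrative, not the actual values from the training run):

```python
import torch.nn as nn

# Toy stand-ins for the real networks (sizes are illustrative).
clip = nn.Linear(512, 512)  # frozen image encoder
mapping = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True),
    num_layers=6,  # 6 encoder layers, as in the text
)
gpt2_blocks = nn.ModuleList(nn.Linear(768, 768) for _ in range(12))

# Freeze CLIP entirely.
for p in clip.parameters():
    p.requires_grad = False

# Partially freeze GPT-2: train only the first and last k blocks.
k = 2  # illustrative value
for i, block in enumerate(gpt2_blocks):
    trainable = i < k or i >= len(gpt2_blocks) - k
    for p in block.parameters():
        p.requires_grad = trainable

trainable_blocks = sum(
    all(p.requires_grad for p in b.parameters()) for b in gpt2_blocks
)
print(trainable_blocks)  # 4
```

The mapping module's parameters are left with `requires_grad=True` (the default), so the optimizer updates it fully while CLIP stays fixed.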
The training process was carried out on a Kaggle P100 GPU. Training took about 3 x 11 h (150 epochs), with a linear learning rate warmup (max LR 3e-3) and a batch size of 64.
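The warmup can be sketched as a simple schedule function (what happens after warmup, constant vs. decay, is not stated here, so holding the LR constant is an assumption of this sketch):

```python
MAX_LR = 3e-3  # peak learning rate from the text

def warmup_lr(step: int, warmup_steps: int, max_lr: float = MAX_LR) -> float:
    """Linearly ramp the LR from ~0 to max_lr over warmup_steps,
    then hold it constant (post-warmup behaviour is assumed)."""
    if step < warmup_steps:
        return max_lr * (step + 1) / warmup_steps
    return max_lr

print(warmup_lr(0, 1000))    # 3e-06
print(warmup_lr(999, 1000))  # 0.003
```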
Clone the repository:
git clone https://github.com/jmisilo/clip-gpt-captioning
cd clip-gpt-captioning
Create a virtual environment and install the requirements:
python -m venv venv
# for windows
.\venv\Scripts\activate
# for linux/mac
source venv/bin/activate
pip install -r requirements.txt
Then run a prediction:
python .\src\predict.py -I <image_path>