
CLIPxGPT Captioner

Description

CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2. The Model uses a Mapping Module to "translate" CLIP embeddings into GPT-2's embedding space. The Model was trained on the Flickr30k dataset, downloaded from Kaggle.

The goal of the project was to explore the possibility of connecting CLIP with GPT-2 and to check whether, with a relatively short training time and a small dataset, the model would be able to recognize situations in pictures. In its first version, the Model achieved satisfactory results.

The Model uses prefixes as in the ClipCap paper. In my original idea the prefix length was 1, but after reading the publication it was changed to 4, which improved performance.

The Model was trained with a frozen CLIP, a fully trained Mapping Module (6 Transformer encoder layers) and a partially frozen GPT-2 (the first and last 14 layers were trained).
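
A minimal sketch of this mapping idea is shown below. The class name, dimensions and hyperparameters are illustrative assumptions, not the exact code from src/: a single CLIP image embedding is expanded into a prefix of four GPT-2-sized token embeddings and refined by a Transformer encoder, after which it would be prepended to the caption token embeddings passed to GPT-2.

import torch
import torch.nn as nn

class Mapping(nn.Module):
    # Maps one CLIP image embedding to a prefix of GPT-2-sized token embeddings.
    def __init__(self, clip_dim=512, gpt_dim=768, prefix_len=4, num_layers=6, nhead=8):
        super().__init__()
        self.prefix_len = prefix_len
        self.gpt_dim = gpt_dim
        # Project the single CLIP vector into prefix_len tokens of GPT-2's width.
        self.expand = nn.Linear(clip_dim, prefix_len * gpt_dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=gpt_dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, clip_emb):
        # clip_emb: (batch, clip_dim) -> prefix: (batch, prefix_len, gpt_dim)
        prefix = self.expand(clip_emb).view(-1, self.prefix_len, self.gpt_dim)
        return self.encoder(prefix)

prefix = Mapping()(torch.randn(2, 512))  # -> shape (2, 4, 768)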

Training was carried out on a Kaggle P100 GPU. It took about 2 x 11 h (106 epochs), with a linearly increasing learning rate (from 0 to 0.0001908) and a batch size of 64. Originally, the Model was supposed to be trained for longer, which explains the non-standard final LR. I also tried a longer training run (150 epochs), but overfitting became noticeable.
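
One possible way to reproduce such a schedule is sketched below; the optimizer choice, peak learning rate and step counts are placeholders and may differ from the actual training script. The key point is that the linear ramp is configured for the longer, originally planned run, so stopping at 106 epochs leaves the LR at an intermediate value.

import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the captioning model's trainable parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # peak LR is a placeholder

planned_epochs = 150   # the originally planned, longer run
steps_per_epoch = 500  # roughly 31k Flickr30k images / batch size 64 (approximate)
total_steps = planned_epochs * steps_per_epoch

# LR ramps linearly from 0 towards the peak over the planned number of steps;
# call scheduler.step() after each optimizer.step() during training.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: min(1.0, step / total_steps)
)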

Example results

[Examples 1-3: sample images with model-generated captions]

As mentioned above, the goal was to test the Model's ability to recognize the situation in an image. In the next phase of experiments, I will try to improve the training process and parameters to achieve better captions on the same dataset.

Usage

Clone the repository:

git clone https://github.com/jmisilo/clip-gpt-captioning

cd clip-gpt-captioning

Create a virtual environment and install the requirements (the activation command below is for Windows; on Linux/macOS use source venv/bin/activate instead):

python -m venv venv
.\venv\Scripts\activate

pip install -r requirements.txt

Run a prediction:

python .\src\predict.py -I <image_path>

References: