This repository contains a TensorFlow implementation of Knowledge Aided Consistency for Weakly Supervised Phrase Grounding (CVPR 2018).
Note: Please read the feature representation files in the `feature` and `annotation` directories before using the code.
Platform: TensorFlow 1.1.0 (Python 2.7)
Visual features: We use a Faster R-CNN pre-trained on PASCAL VOC 2012 for Flickr30K Entities, and one pre-trained on ImageNet for ReferIt Game. Please put the visual features in the `feature` directory (more details can be found in the `README.md` in that directory).
Global features: We extract a global visual feature for each image in Flickr30K Entities using a Faster R-CNN pre-trained on PASCAL VOC 2012 and store them in the `global_feat` folder.
Sentence features: We encode a one-hot vector for each query, as well as the annotation for each query-image pair. Please put the encoded features in the `annotation` directory (more details are provided in the `README.md` in that directory).
File list: We generate a file list for each image in Flickr30K Entities. If you would like to train and test on another dataset (e.g. ReferIt Game), please follow the format of `flickr_train_val.lst` and `flickr_test.lst`.
Hyperparameters: Please check the `Config` class in `train.py`.
Before training, we first pre-train a GroundeR model (unsupervised scenario) and save the pre-trained model in the folder `model/ground_unsupervised_base` (epoch 53). The implementation of GroundeR is in this repository.
For training, please enter the root folder of `KAC-Net`, then run:

```shell
$ python train.py -m [Model Name] -g [GPU ID] -k [knowledge]
```
You can choose different types of knowledge (the `-k` option) as KBP values: `coco` and `hard_coco` select soft and hard KBP values, respectively, from a Faster R-CNN pre-trained on MSCOCO; `pas` and `hard_pas` select soft and hard KBP values, respectively, from a VGG network pre-trained on PASCAL VOC 2012. More details can be found in the paper.
For testing, please enter the root folder of `KAC-Net`, then run:

```shell
$ python evaluate.py -m [Model Name] -g [GPU ID] -k [knowledge] --restore_id [Restore epoch ID]
```
Make sure the model name entered for evaluation matches the one used in training, and that a checkpoint for the given epoch ID exists.
If you find this repository useful for your research, please consider citing:
```
@inproceedings{Chen_2018_CVPR,
  title={Knowledge Aided Consistency for Weakly Supervised Phrase Grounding},
  author={Chen, Kan and Gao, Jiyang and Nevatia, Ram},
  booktitle={CVPR},
  year={2018}
}
```