
Entity-related-Unsupervised-Pretraining-with-Visual-Prompts-for-MABSA

The code of our paper "Entity-related Unsupervised Pretraining with Visual Prompts for Multimodal Aspect-based Sentiment Analysis"

Data Download

The MABSA dataset can be obtained from the paper "Vision-Language Pre-Training for Multimodal Aspect-Based Sentiment Analysis" (https://github.com/NUSTM/VLP-MABSA).

The pre-training dataset can be downloaded from COCO2014: https://cocodataset.org/

split_coco.py is used to split COCO2014 for pre-training; a rough sketch of this step is shown below.
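
As an illustration only (not the interface of split_coco.py itself), the sketch below splits the COCO2014 caption annotations into pre-training and validation subsets. The annotation path, split ratio, and output file names are assumptions.

```python
# Minimal sketch of splitting COCO2014 captions for pre-training.
# Paths, split ratio, and output file names are assumptions, not the
# actual logic of split_coco.py.
import json
import random

random.seed(42)

with open("annotations/captions_train2014.json", "r") as f:
    coco = json.load(f)

image_ids = [img["id"] for img in coco["images"]]
random.shuffle(image_ids)

n_val = int(0.05 * len(image_ids))          # hold out a small validation split
val_ids = set(image_ids[:n_val])

splits = {"train": [], "val": []}
for ann in coco["annotations"]:             # each entry has image_id and caption
    split = "val" if ann["image_id"] in val_ids else "train"
    splits[split].append({"image_id": ann["image_id"], "caption": ann["caption"]})

for name, items in splits.items():
    with open(f"coco_{name}.json", "w") as f:
        json.dump(items, f)
```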

Data pre-processing

We use clip-vit-base-patch16 to extract image features.

parse_coco.py and parse_twitter.py are used to pre-process the data.
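
For reference, a minimal sketch of image-feature extraction with clip-vit-base-patch16 via Hugging Face Transformers; the example image path, choice of patch-level vs. pooled features, and output file are assumptions and may differ from what parse_coco.py / parse_twitter.py actually do.

```python
# Minimal sketch of extracting image features with clip-vit-base-patch16.
# Image path and saved tensor are placeholders for illustration.
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPVisionModel

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16").eval()

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

patch_features = outputs.last_hidden_state   # (1, 197, 768): [CLS] + 14x14 patches
pooled_feature = outputs.pooler_output       # (1, 768) global image representation
torch.save(patch_features, "example_clip_feat.pt")
```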

Model backbone

We use flan-t5-base and t5-base to initialize our model.
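
A minimal sketch of loading the text backbone from the public flan-t5-base (or t5-base) checkpoints with Hugging Face Transformers; the sample input and generation call are only for illustration, and attaching the visual prompts is specific to this repository's code.

```python
# Minimal sketch of initializing the text backbone from a public T5 checkpoint.
from transformers import AutoTokenizer, T5ForConditionalGeneration

backbone_name = "google/flan-t5-base"   # or "t5-base"
tokenizer = AutoTokenizer.from_pretrained(backbone_name)
backbone = T5ForConditionalGeneration.from_pretrained(backbone_name)

# Illustrative forward pass; the real model conditions generation on visual prompts.
inputs = tokenizer("a sample multimodal sentiment input", return_tensors="pt")
outputs = backbone.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```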
