
[AAAI'23] Unsupervised Explanation Generation via Correct Instantiations


Shark-NLP/Neon


Neon

Official Codebase for Unsupervised Explanation Generation via Correct Instantiations.

Please feel free to contact us with any questions or advice: [email protected].

Framework

In this paper, we propose NEON, a two-phase framework that helps large pre-trained language models (PLMs) generate explanations by implicitly identifying conflict points in the statement.

Figure 1: The framework of Neon.

Main Results

Table 1: Automatic evaluation results.

Table 2: Human evaluation results.

Dependencies

The code is mainly based on PyTorch and Huggingface transformers.

  • Python==3.9.10
  • pytorch==1.10.2
  • transformers==4.21.0
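Assuming a Python 3.9 environment is already set up, the pinned versions above can be installed with pip. This is one possible setup, not a command from the repo; note that the PyPI package name for PyTorch is torch, not pytorch:

```shell
# Install the pinned dependencies listed above.
pip install torch==1.10.2 transformers==4.21.0
```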

Datasets

Our main datasets are ComVE and e-SNLI. You need to download them and put them under the data folder. The detailed structure of files can be found in data/ComVE/README.md and data/e-SNLI/README.md, respectively.
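For reference, the expected layout can be prepared as below. Only the data/ComVE and data/e-SNLI paths come from this README; the datasets themselves must be downloaded separately:

```shell
# Create the expected dataset folders; download ComVE and e-SNLI
# separately and place their files here, following each README.md.
mkdir -p data/ComVE data/e-SNLI
```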

In some experiments, we apply simple preprocessing strategies to our datasets, such as filtering and shuffling. These scripts are omitted, but the steps are straightforward to reproduce.
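As a hypothetical sketch of such preprocessing (file names and format are placeholders, not taken from the repo), filtering out empty lines and shuffling a TSV split can be done with standard tools:

```shell
# Toy placeholder split with a blank line; real data comes from ComVE / e-SNLI.
printf 'sentence one\tlabel1\n\nsentence two\tlabel2\n' > train.tsv
grep -v '^$' train.tsv > train_filtered.tsv   # filtering: drop empty lines
shuf train_filtered.tsv > train_shuffled.tsv  # shuffling: randomize example order
```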

Main Experiments

Phase1

Run in-context learning for Phase I:

cd ./Phase1/in_context/
sh run_phase1.sh

Run CGMH (an unsupervised method) for Phase I:

cd ./Phase1/CGMH/
sh run.sh

Phase2

cd ./Phase2/
sh run_phase2.sh

Evaluation

cd ./Evaluation/
sh run_eval.sh

Analysis

For each analysis experiment XXX, we provide the Python script XXX.py and its corresponding shell script XXX.sh under the Analysis directory. You can run XXX.sh directly to reproduce each analysis. Taking the first analysis experiment, Quality, as an example:

cd ./Analysis/Quality/
sh run_binary.sh

Citations

Please cite us if our paper or code helps you :)

@article{cheng2022unsupervised,
  title={Unsupervised Explanation Generation via Correct Instantiations},
  author={Cheng, Sijie and Wu, Zhiyong and Chen, Jiangjie and Li, Zhixing and Liu, Yang and Kong, Lingpeng},
  journal={arXiv preprint arXiv:2211.11160},
  year={2022}
}
