Data, Code and Model for the paper "Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline".
If you find the code useful, please cite the following paper.
```
@inproceedings{ernst-etal-2021-summary,
    title = "Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline",
    author = "Ernst, Ori and Shapira, Ori and Pasunuru, Ramakanth and Lepioshkin, Michael and Goldberger, Jacob and Bansal, Mohit and Dagan, Ido",
    booktitle = "Proceedings of the 25th Conference on Computational Natural Language Learning",
    month = nov,
    year = "2021",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.conll-1.25",
    pages = "310--322",
}
```
You can use our Hugging Face model or check out our demo here.
The `run_glue.py` script was forked from huggingface v2.5.1 and edited for our purpose.
The `supervised_oie_wrapper` directory is a wrapper over AllenNLP's (v0.9.0) pretrained Open IE model, implemented by Gabriel Stanovsky. It was forked from here and edited for our purpose.
In this repository we used Python 3.6. Please refer to `requirements.txt` for the other requirements.
All manual datasets are under the `manual_datasets` directory, including the crowdsourced dev and test sets and the Pyramid-based train set.
As the DUC-based datasets are restricted by the LDC agreement, we provide here only the character indices of all propositions and sentences. If you have the original dataset, you can easily regenerate the alignments from these indices. If you have any issue regarding the DUC alignment regeneration, please contact us via email. In addition, we are working on uploading our alignment datasets to the LDC so that no agreement issues remain; this page will be updated soon.
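Regenerating an alignment from the released indices amounts to slicing the original DUC text at those character offsets. A minimal sketch, assuming the offsets come as (start, end) pairs (the actual released format may differ):

```python
def recover_spans(document_text, offsets):
    """Slice the original DUC document at the released character offsets.

    `offsets` is assumed here to be a list of (start, end) index pairs, one
    per proposition or sentence; the actual released format may differ.
    """
    return [document_text[start:end] for start, end in offsets]

# Toy illustration (not real DUC data):
doc = "The storm hit on Monday. Thousands lost power."
print(recover_spans(doc, [(0, 24), (25, 46)]))
# → ['The storm hit on Monday.', 'Thousands lost power.']
```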
MultiNews alignments are released in full.
Predicted alignments of MultiNews and CNN/DailyMail train and val datasets can be found here.
To generate the derived datasets (salience, clustering and generation) from an alignment file, run:

```
python createSubDatasets.py -alignments_path <ALIGNMENTS_PATH> -out_dir_path <OUT_DIR_PATH>
```
To apply the alignment model to your own data, follow these steps:

- Download the trained model here and put it under
  `/transformers/examples/out/outnewMRPC_OIU/SpansOieNegativeAll_pan_full089_fixed/checkpoint-2000/`
- Run:

  ```
  python main_predict.py -data_path <DATA_PATH> -output_path <OUT_DIR_PATH> -alignment_model_path <ALIGNMENT_MODEL_PATH>
  ```
`<DATA_PATH>` should have the following structure, where a summary and its related document directory share the same name:

- <DATA_PATH>
  - summaries
    - A.txt
    - B.txt
    - ...
  - A
    - doc_A1
    - doc_A2
    - ...
  - B
    - doc_B1
    - doc_B2
    - ...
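The layout above can also be created programmatically. A minimal sketch, where the helper name, topic names, and file contents are illustrative and not part of the repository:

```python
import os
import tempfile

def build_data_dir(data_path, topics):
    """Create the expected <DATA_PATH> layout: one '<name>.txt' summary under
    'summaries/', plus a sibling '<name>/' directory holding that topic's
    source documents. `topics` maps topic name -> (summary_text, {doc_name: doc_text}).
    """
    os.makedirs(os.path.join(data_path, "summaries"), exist_ok=True)
    for name, (summary, docs) in topics.items():
        with open(os.path.join(data_path, "summaries", name + ".txt"), "w") as f:
            f.write(summary)
        doc_dir = os.path.join(data_path, name)
        os.makedirs(doc_dir, exist_ok=True)
        for doc_name, text in docs.items():
            with open(os.path.join(doc_dir, doc_name), "w") as f:
                f.write(text)

# Illustrative usage with toy content:
root = tempfile.mkdtemp()
build_data_dir(root, {"A": ("Summary of topic A.", {"doc_A1": "First source.", "doc_A2": "Second source."})})
```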
It will create two files in `<OUT_DIR_PATH>`:

- `dev.tsv` - contains all alignment candidate pairs.
- a `.csv` file - contains all predicted aligned pairs with their classification score.
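To post-process the predicted pairs, you can filter the output `.csv` by classification score. A sketch, assuming a column named `score` (the column names here are assumptions, not the script's actual output schema — inspect the real header first):

```python
import csv
import os
import tempfile

def filter_alignments(csv_path, threshold=0.5):
    """Keep predicted pairs whose classification score exceeds `threshold`.

    The column name 'score' is an assumption about the output schema;
    inspect the actual .csv header before relying on it.
    """
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f) if float(row["score"]) > threshold]

# Toy illustration with a made-up output file (not the script's real schema):
path = os.path.join(tempfile.mkdtemp(), "predictions.csv")
with open(path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["summary_span", "doc_span", "score"])
    writer.writeheader()
    writer.writerow({"summary_span": "s1", "doc_span": "d1", "score": "0.91"})
    writer.writerow({"summary_span": "s1", "doc_span": "d2", "score": "0.12"})
print(filter_alignments(path))
```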