Knowledge Graph Enhanced Large Language Model Editing
- A GPU with at least 48 GB of memory is required.
- To set up the environment, run:

  ```shell
  conda create -n glame python=3.9.7
  conda activate glame
  pip install -r requirements.txt
  ```
An example of editing GPT-J with GLAME on the CounterFact dataset:

```shell
python -m experiments.evaluate \
    --alg_name=GLAME \
    --model_name=[path/to/your/gpt-j/model] \
    --hparams_fname=cf/gpt-j-6b.json \
    --ds_name=cf \
    --num_edits=1
```
This run also computes the covariance matrix estimation used for editing.
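In MEMIT-style editors, the "covariance" statistic is the uncentered second moment C = E[k kᵀ] of key vectors collected from a chosen transformer layer over a text corpus. A minimal sketch of how such an estimate is accumulated (function and variable names here are illustrative, not this repository's API):

```python
import numpy as np

def estimate_second_moment(keys):
    """Accumulate C = E[k k^T] over a stream of key vectors.

    `keys` is an iterable of 1-D arrays (stand-ins for hidden-state
    "keys" read from one transformer layer across many corpus tokens).
    """
    total = None
    count = 0
    for k in keys:
        outer = np.outer(k, k)          # rank-1 term k k^T
        total = outer if total is None else total + outer
        count += 1
    return total / count                # empirical E[k k^T]

# Toy usage: random vectors stand in for real layer activations.
rng = np.random.default_rng(0)
cov = estimate_second_moment(rng.standard_normal((100, 8)))
```

In practice this statistic is computed once per layer on a large corpus and cached, since it depends only on the base model, not on the edits.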
To summarize the results on the CounterFact dataset, use `experiments/summarize.py`:

```shell
python -m experiments.summarize --dir_name=GLAME --runs=run_<run1>
```
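Conceptually, a summarizer of this kind walks a run directory, loads each case's JSON record, and averages the numeric metrics. A hypothetical sketch (the file layout and metric names are assumptions, not the repository's actual output format):

```python
import json
from pathlib import Path
from statistics import mean

def summarize_runs(run_dir):
    """Average numeric per-case metrics across case_*.json files.

    File naming and metric keys here are illustrative only.
    """
    records = [
        json.loads(p.read_text())
        for p in sorted(Path(run_dir).glob("case_*.json"))
    ]
    summary = {}
    for key in records[0]:
        # Keep only metrics that are numeric in every record.
        vals = [r[key] for r in records
                if isinstance(r.get(key), (int, float))]
        if vals:
            summary[key] = mean(vals)
    return summary
```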
Run `summarize_port` / `summarize_mquake` for test results on CounterFactPlus and MQuAKE.
The code for our experiments is based on MEMIT.
If you find this work helpful for your research, please kindly cite it:
```bibtex
@misc{zhang2024knowledge,
      title={Knowledge Graph Enhanced Large Language Model Editing},
      author={Mengqi Zhang and Xiaotian Ye and Qiang Liu and Pengjie Ren and Shu Wu and Zhumin Chen},
      year={2024},
      eprint={2402.13593},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```