Reversal_Curse

This page contains three Python notebooks, Reversal_data_generation, llama_union_intersectoin, and bert_reversal_curse, for the paper: Yang, Jingye, Da Wu, and Kai Wang. "Not All Large Language Models (LLMs) Succumb to the 'Reversal Curse': A Comparative Study of Deductive Logical Reasoning in BERT and GPT Models." arXiv preprint arXiv:2312.03633 (2023).

Specifically, Reversal_data_generation contains all the code for generating the synthetic training and testing data used in the paper. bert_reversal_curse contains the code for training and evaluating the BERT model, and llama_union_intersectoin contains the code for training and evaluating the LLaMA model. Together, these notebooks reproduce the main results of the paper (Tables 1-5).

Furthermore, the code, especially the synthetic data generation notebook, can be readily modified for personalized testing and exploration, allowing interested users to tailor it to their own requirements and objectives. A minimal sketch of such a modification is shown below.
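
For example, here is a minimal sketch of the kind of forward/reverse statement pairs one might generate to probe the reversal curse. The relation templates, name list, and make_pairs helper are purely illustrative assumptions, not the exact scheme used in Reversal_data_generation:

```python
import random

# Hypothetical relation templates: each forward statement has a logically
# equivalent reverse form. These are illustrative only; see
# Reversal_data_generation for the schemes actually used in the paper.
RELATIONS = [
    ("{a} is the parent of {b}.", "{b} is the child of {a}."),
    ("{a} is the teacher of {b}.", "{b} is the student of {a}."),
]

NAMES = ["Alice", "Bob", "Carol", "Dave", "Erin", "Frank"]

def make_pairs(n, seed=0):
    """Generate n (forward, reverse) statement pairs from random name pairs."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        a, b = rng.sample(NAMES, 2)   # two distinct names
        fwd, rev = rng.choice(RELATIONS)
        pairs.append((fwd.format(a=a, b=b), rev.format(a=a, b=b)))
    return pairs

if __name__ == "__main__":
    for fwd, rev in make_pairs(3):
        # Train on the forward statement; test whether the model
        # can answer the logically equivalent reverse one.
        print(fwd, "->", rev)
```

Training on the forward statements and evaluating on the reverse ones is the basic setup for testing whether a model generalizes from "A is the parent of B" to "B is the child of A".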