# PPairS: Pairwise Preference Search with Linear Probing


This repository implements PPairS, introduced in *Aligning Language Model Evaluators with Human Judgement*, a dissertation submitted for the degree of Master of Research in Environmental Data Science.

## Overview

Large language models (LLMs) are capable automatic evaluators, well suited to problems in which large datasets of text samples must be evaluated on a numerical or Likert scale, e.g. scoring factual accuracy or the quality of generated natural language. However, LLMs remain sensitive to prompt design and exhibit biases, so their judgements may be misaligned with human assessments. Pairwise Preference Search (PairS) is a search method that instead exploits LLMs' ability to conduct pairwise comparisons, partially circumventing the issues with direct-scoring methods; however, it still relies heavily on prompt design. As an alternative, we draw on concept-based interpretability methods and introduce Pairwise Preference Search with Linear Probing (PPairS), which extends PairS through the construction of contrast pair embeddings and the use of linear probing to directly align an evaluator with human assessments. PPairS achieves state-of-the-art performance on text evaluation tasks and on the domain-specific problems of fact-checking and uncertainty estimation. It also reduces the financial and computational costs of automatic evaluation with LLMs, since cheaper, smaller, open-weights models can achieve performance competitive with frontier proprietary models.
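To make this concrete, below is a minimal, hypothetical sketch of the PPairS recipe, not this package's actual API: build a contrast pair embedding for each comparison, fit a linear probe against human preference labels, then use the probe as the comparator inside an ordinary sort. `contrast_embedding` is a toy stand-in for real LLM activation extraction.

```python
# Hypothetical sketch of the PPairS idea; every name here is illustrative,
# not this repository's real API.
from functools import cmp_to_key

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def contrast_embedding(item_a: str, item_b: str) -> np.ndarray:
    """Toy stand-in: in PPairS this would be the LLM's hidden-state activation
    for a prompt asserting "A is preferred to B", minus the activation for the
    reverse assertion (a contrast pair embedding)."""
    return rng.normal(size=64)

# Training: fit a linear probe so that comparisons agree with human judgements.
labelled_pairs = [(f"text {i}", f"text {i + 1}") for i in range(100)]
human_labels = rng.integers(0, 2, size=len(labelled_pairs))  # 1 = A preferred
X = np.stack([contrast_embedding(a, b) for a, b in labelled_pairs])
probe = LogisticRegression().fit(X, human_labels)

# Inference: the probe acts as the pairwise comparator inside a standard sort,
# replacing prompted comparisons.
def compare(item_a: str, item_b: str) -> int:
    p_a = probe.predict_proba(contrast_embedding(item_a, item_b)[None])[0, 1]
    return -1 if p_a > 0.5 else 1  # rank A above B if the probe prefers A

ranked = sorted([f"text {i}" for i in range(10)], key=cmp_to_key(compare))
```

Because the probe is trained directly on human labels, alignment no longer hinges on prompt wording; the real prompt templates and sorting live in the src/PPairS package described below.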

## Repo Structure

```
PPairS/
├── src/
│   ├── dev/
│   │   ├── benchmarks/       # text evaluation benchmarks - Experiment 1
│   │   ├── sciencefeedback/  # fact-checking - Experiment 2
│   │   └── climatex/         # assessing expert confidence in climate science - Experiment 3
│   └── PPairS/               # core of PPairS, including prompt templates and sorting
├── README.md                 # project overview and instructions
├── pyproject.toml            # config file for the PPairS package
└── thesis.pdf                # MRes dissertation introducing this work
```

## Data

For a replication of our results, our original data sources are:

- the NEWSROOM and SummEval summarization benchmarks (Experiment 1);
- claims from Science Feedback (https://science.feedback.org/) (Experiment 2);
- IPCC working group and synthesis reports since the third assessment cycle (Experiment 3).

Alternatively, visit our project archive on Zenodo (https://zenodo.org/records/12627714). This archive has the following structure:

```
PPairS/
├── benchmarks/
│   ├── data/                  # downloaded NEWSROOM and SummEval datasets
│   ├── prompts/               # prompts for three LLMs and two datasets - PairS (_theirs) and PPairS (_mine)
│   ├── scores/                # outputs of direct-scoring
│   ├── logits/                # outputs of logit comparison (PairS)
│   └── activations/           # contrast pair activations (PPairS)
├── sciencefeedback/
│   ├── sciencefeedback.jsonl  # all scraped and tagged claims from https://science.feedback.org/
│   ├── prompts/               # prompts used for direct-scoring (_score), PairS (_compare), and PPairS (_contrast)
│   ├── scores/                # outputs of direct-scoring
│   ├── logits/                # outputs of logit comparison (PairS)
│   └── activations/           # contrast pair activations (PPairS)
└── climatex/
    ├── pdf/                   # all IPCC working group and synthesis reports since the third assessment cycle
    ├── claims/                # all scraped and preprocessed claims from the above PDFs
    ├── embeddings/            # context embeddings for each of the above claims
    ├── topics/                # results of topic modelling for each report
    ├── prompts/               # prompts for each assessment cycle's reports
    └── activations/           # contrast pair activations (PPairS)
```
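For reference, the logits/ directories above store the outputs of PairS-style logit comparison. A generic sketch of that step with the Hugging Face transformers API follows; the model name and prompt wording are assumptions for illustration, not this repository's exact setup (the real prompts live in the prompts/ directories).

```python
# Generic sketch of PairS-style logit comparison; model and prompt are
# illustrative assumptions, not this repository's exact configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # assumption: any instruction-tuned LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

def prefer_a(text_a: str, text_b: str) -> bool:
    """Return True if the next-token logit for answering "A" beats that for "B"."""
    prompt = (
        "Which summary is better?\n"
        f"Summary A: {text_a}\n"
        f"Summary B: {text_b}\n"
        "Answer with a single letter (A or B): "
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]  # logits at final position
    id_a = tokenizer.encode("A", add_special_tokens=False)[0]
    id_b = tokenizer.encode("B", add_special_tokens=False)[0]
    return (next_token_logits[id_a] > next_token_logits[id_b]).item()
```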

## Installation

The main requirement for installation is Python >= 3.12. We strongly recommend a CUDA-enabled GPU for faster inference with LLMs.

> [!NOTE]
> PPairS has not been tested thoroughly with the newly released numpy 2.0.
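If you do run into numpy-related breakage, pinning to the 1.x series is a reasonable workaround (a suggestion, not a pinned requirement of the package):

```shell
pip install "numpy<2.0"
```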

1. Clone the repository:

   ```shell
   git clone https://github.com/maiush/PPairS.git
   cd PPairS
   ```

2. [Optional] Set up a Python environment, e.g. using Anaconda:

   ```shell
   conda create -n ppairs python=3.12 -y
   conda activate ppairs
   ```

3. [Optional - for replication] Download our experiment data:

   ```shell
   wget https://zenodo.org/records/12627714/files/PPairS.zip
   unzip -qq PPairS.zip
   rm PPairS.zip
   ```

4. Set up a path to your / our data (important). From the repository root (the generated file is sketched after this list):

   ```shell
   echo 'data_storage = "<PATH_TO_YOUR_DATA>"' > src/dev/constants.py
   ```

5. Install PPairS, from the repository root:

   ```shell
   pip install .
   ```
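Step 4 writes a one-line Python module; with a concrete path substituted, the generated src/dev/constants.py should look like the following (the path shown is illustrative):

```python
# src/dev/constants.py (generated in step 4; the path is illustrative)
data_storage = "/home/username/data/PPairS"
```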

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Citation

```bibtex
@misc{maiya2024ppairs,
  author       = {Sharan Maiya},
  title        = {{Pairwise Preference Search with Linear Probing (PPairS)}},
  year         = 2024,
  institution  = {University of Cambridge},
  howpublished = {\url{https://github.com/maiush/PPairS}}
}
```

## Funding

This work was supported by the UKRI Centre for Doctoral Training in Application of Artificial Intelligence to the study of Environmental Risks [EP/S022961/1].

## Contact

For any queries or further information, contact Sharan Maiya.

