[ECIR'24] Implementation of "Large Language Models are Zero-Shot Rankers for Recommender Systems"

LLMRank

LLMRank investigates the capacity of LLMs to act as ranking models for recommender systems. [paper]

Yupeng Hou†, Junjie Zhang†, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, Wayne Xin Zhao. Large Language Models are Zero-Shot Rankers for Recommender Systems. ECIR 2024.

🛍️ LLMs as Zero-Shot Rankers

We use LLMs as ranking models in an instruction-following paradigm. For each user, we first construct two natural language patterns that contain the sequential interaction history and the retrieved candidate items, respectively. These two patterns are then filled into a natural language template to form the final instruction. In this way, the LLM is expected to understand the instruction and output the ranking results accordingly.
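As a rough illustration of this two-pattern construction, the sketch below fills a history pattern and a candidate pattern into a final ranking instruction. The template wording and the `build_instruction` name are illustrative assumptions, not the repository's exact code.

```python
def build_instruction(history, candidates):
    """Fill a user's interaction history and retrieved candidates into a
    natural-language ranking instruction (illustrative template)."""
    # Pattern 1: the user's sequential interaction history
    history_pattern = (
        "I've watched the following movies in the past in order:\n"
        + "\n".join(f"{i}. {title}" for i, title in enumerate(history, 1))
    )
    # Pattern 2: the retrieved candidate items to be ranked
    candidate_pattern = (
        f"Now there are {len(candidates)} candidate movies that I can watch next:\n"
        + "\n".join(f"{i}. {title}" for i, title in enumerate(candidates, 1))
    )
    # Final instruction: both patterns filled into a ranking template
    return (
        f"{history_pattern}\n\n{candidate_pattern}\n\n"
        f"Please rank these {len(candidates)} movies by measuring the "
        "possibilities that I would like to watch next most, according to "
        "my watching history."
    )
```

The resulting string is sent to the LLM as a single instruction; the model's textual response is then parsed back into an item ranking.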

🚀 Quick Start

1. Write your own OpenAI API keys into `llmrank/openai_api.yaml`.

2. Unzip dataset files.

   ```bash
   cd llmrank/dataset/ml-1m/; unzip ml-1m.inter.zip
   cd llmrank/dataset/Games/; unzip Games.inter.zip
   ```

   For data preparation details, please refer to [data-preparation].

3. Install dependencies.

   ```bash
   pip install -r requirements.txt
   ```

4. Evaluate ChatGPT's zero-shot ranking abilities on the ML-1M dataset.

   ```bash
   cd llmrank/
   python evaluate.py -m Rank
   ```

🔍 Key Findings

Please click the links below each "Observation" to find the code and scripts to reproduce the results.

Observation 1. LLMs struggle to perceive the order of user histories, but can be triggered to perceive it

LLMs can utilize historical behaviors for personalized ranking, but struggle to perceive the order of the given sequential interaction histories.

By employing specially designed prompting strategies, such as recency-focused prompting and in-context learning, LLMs can be triggered to perceive the order of historical user behaviors, leading to improved ranking performance.

Code is here -> [reproduction scripts]
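To make the idea concrete, a minimal sketch of recency-focused prompting is shown below: the instruction explicitly calls out the most recent interaction so the LLM attends to the order of the history. The exact phrasing is an assumption for illustration, not the paper's verbatim prompt.

```python
def recency_focused_prompt(history, candidates):
    """Ranking instruction that explicitly emphasizes the most recent
    interaction (recency-focused prompting; wording is illustrative)."""
    return (
        "I've watched the following movies in the past in order:\n"
        + "\n".join(f"{i}. {t}" for i, t in enumerate(history, 1))
        # The key difference from a plain prompt: highlight the latest item
        + f"\n\nNote that my most recently watched movie is {history[-1]}.\n\n"
        + f"Now there are {len(candidates)} candidate movies:\n"
        + "\n".join(f"{i}. {t}" for i, t in enumerate(candidates, 1))
        + "\n\nPlease rank these candidate movies according to my watching history."
    )
```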

Observation 2. Biases exist in using LLMs to rank

LLMs suffer from position bias and popularity bias while ranking, which can be alleviated by specially designed prompting or bootstrapping strategies.

Code is here -> [reproduction scripts]
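One way to read the bootstrapping strategy above: rank the same candidate set several times under random orderings and aggregate each item's positions, which averages out position bias. In the sketch below, `rank_fn` is a stand-in for an LLM ranking call; the function name and aggregation by mean position are assumptions for illustration.

```python
import random
from collections import defaultdict

def bootstrap_rank(candidates, rank_fn, n_rounds=3, seed=0):
    """Aggregate rankings over randomly shuffled candidate orders to
    mitigate position bias (illustrative bootstrapping sketch)."""
    rng = random.Random(seed)
    total_pos = defaultdict(float)
    for _ in range(n_rounds):
        # Present the candidates in a fresh random order each round
        shuffled = candidates[:]
        rng.shuffle(shuffled)
        ranking = rank_fn(shuffled)  # e.g., one LLM ranking call
        for pos, item in enumerate(ranking):
            total_pos[item] += pos
    # Smaller average position across rounds = ranked higher overall
    return sorted(candidates, key=lambda item: total_pos[item] / n_rounds)
```

If the underlying ranker is order-insensitive, the aggregated ranking matches its output; if it is position-biased, the shuffling spreads that bias evenly across items.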

Observation 3. Promising zero-shot ranking abilities

LLMs have promising zero-shot ranking abilities, especially on candidates retrieved by multiple candidate generation models with different practical strategies.

Code is here -> [reproduction scripts]

🌟 Acknowledgement

Please cite the following paper if you find our code helpful.

@inproceedings{hou2024llmrank,
  title={Large Language Models are Zero-Shot Rankers for Recommender Systems},
  author={Yupeng Hou and Junjie Zhang and Zihan Lin and Hongyu Lu and Ruobing Xie and Julian McAuley and Wayne Xin Zhao},
  booktitle={{ECIR}},
  year={2024}
}

The experiments are conducted using the open-source recommendation library RecBole.

We use the released pre-trained models of UniSRec and VQ-Rec in our zero-shot recommendation benchmarks.

Thanks @neubig for the amazing implementation of asynchronously dispatching OpenAI API requests. [code]
