- 2024.5: We have updated the Steam dataset to a new version, fixing an issue that caused certain data to be repeated in the last interacted item of each sequence.
- 🔥 2024.3: Our paper is accepted by SIGIR'24! Thanks to all collaborators! 🎉🎉
- 🔥 2024.3: Our datasets and checkpoints are released on Hugging Face.
Prepare the environment:

```shell
git clone https://github.com/ljy0ustc/LLaRA.git
cd LLaRA
pip install -r requirements.txt
```
Prepare the pre-trained LLaMA2-7B model from Hugging Face (https://huggingface.co/meta-llama/Llama-2-7b-hf).
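Before training, it can help to confirm that the local model directory is complete. A minimal sketch, assuming the standard Hugging Face layout of Llama-2-7b-hf (the `check_llm_dir` helper name is ours, not part of the repo):

```shell
# Sanity-check that a local Llama-2-7b-hf directory contains the core
# files of the standard Hugging Face layout.
check_llm_dir() {
  for f in config.json tokenizer.model; do
    [ -e "$1/$f" ] || { echo "missing: $1/$f"; return 1; }
  done
  echo "looks ok: $1"
}
# Usage: check_llm_dir /path/to/Llama-2-7b-hf
```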
Download the data and checkpoints.

Prepare the data and checkpoints: put the data under the directory `data/ref/` and the checkpoints under the directory `checkpoints/`.
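With the repository root as the working directory, the expected layout can be created up front. This is a sketch; the directory names come from the step above, and the `cp` sources are placeholders for your downloaded files:

```shell
# Create the directory layout LLaRA expects, then copy the downloaded
# data and checkpoints into place.
mkdir -p data/ref checkpoints
# cp <downloaded-data>/* data/ref/
# cp <downloaded-checkpoints>/* checkpoints/
ls -d data/ref checkpoints
```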
Train LLaRA with a single A100 GPU on the MovieLens dataset:

```shell
sh train_movielens.sh
```

Train LLaRA with a single A100 GPU on the Steam dataset:

```shell
sh train_steam.sh
```

Train LLaRA with a single A100 GPU on the LastFM dataset:

```shell
sh train_lastfm.sh
```

Note: set the `llm_path` argument to the directory path of your own LLaMA2 model.
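One way to set `llm_path` without editing each script by hand is a stream substitution. This is only a sketch: it assumes the `train_*.sh` scripts pass a `--llm_path <path>` argument on a single line, and the command line below is a hypothetical example, not the repo's actual script contents:

```shell
# Rewrite a hypothetical --llm_path argument to point at a local model copy.
LLM_PATH="/your/path/Llama-2-7b-hf"   # assumption: adjust to your machine
echo "python main.py --mode train --llm_path /old/path" \
  | sed "s|--llm_path[[:space:]]*[^ ]*|--llm_path ${LLM_PATH}|"
# -> python main.py --mode train --llm_path /your/path/Llama-2-7b-hf
```

The same substitution can be applied in place on a script file with `sed -i`.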
Test LLaRA with a single A100 GPU on the MovieLens dataset:

```shell
sh test_movielens.sh
```

Test LLaRA with a single A100 GPU on the Steam dataset:

```shell
sh test_steam.sh
```

Test LLaRA with a single A100 GPU on the LastFM dataset:

```shell
sh test_lastfm.sh
```