
MELT: Mutual Enhancement of Long-Tailed User and Item for Sequential Recommendation

The official source code for MELT: Mutual Enhancement of Long-Tailed User and Item for Sequential Recommendation, accepted at SIGIR 2023 as a full paper.

Overview

[Figure] Overall framework of MELT

Abstract

The long-tailed problem is a long-standing challenge in Sequential Recommender Systems (SRS), and it exists in terms of both users and items. While many existing studies address the long-tailed problem in SRS, they focus on either the user or the item perspective. However, we discover that the long-tailed user and item problems exist at the same time, and considering only one of them leads to sub-optimal performance on the other. In this paper, we propose a novel framework for SRS, called Mutual Enhancement of Long-Tailed user and item (MELT), that jointly alleviates the long-tailed problem from the perspectives of both users and items. MELT consists of bilateral branches, each of which is responsible for long-tailed users and long-tailed items, respectively, and the branches are trained to mutually enhance each other through a curriculum learning-based training scheme. MELT is model-agnostic in that it can be seamlessly integrated with existing SRS models. Extensive experiments on eight datasets demonstrate the benefit of alleviating the long-tailed problems for both users and items without sacrificing the performance of head users and items, which has not been achieved by existing methods. To the best of our knowledge, MELT is the first work that jointly alleviates the long-tailed user and item problems in SRS.

Data Preprocess

1. Download the raw datasets (i.e., ratings only) from the following links.

  • Amazon: Download the raw datasets instead of the "5-core" datasets.

  • Behance: Download the "Behance_appreciate_1M.gz" file from the data explorer.

  • Foursquare

2. Then, put the raw datasets in the raw_dataset directory.

3. Preprocess the datasets using the preprocess.ipynb file.

We follow the same preprocessing strategy as SASRec; a minimal sketch of it is shown below.
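For reference, here is a minimal sketch of that SASRec-style preprocessing (treat ratings as implicit feedback, drop users and items with fewer than 5 interactions, and order each user's sequence by timestamp). The file name and column order are assumptions; preprocess.ipynb is the authoritative version.

# Minimal sketch of SASRec-style preprocessing (illustrative, not the notebook itself).
from collections import defaultdict
import pandas as pd

# The raw Amazon "ratings only" files are header-less CSVs; the column order
# below is an assumption -- adjust it to the actual file layout.
df = pd.read_csv("raw_dataset/ratings_Beauty.csv",
                 names=["user", "item", "rating", "timestamp"])

# Keep only users and items with at least 5 interactions
# (repeated until stable; a single filtering pass is also common).
while True:
    users = df["user"].value_counts()
    items = df["item"].value_counts()
    kept = df[df["user"].isin(users[users >= 5].index)
              & df["item"].isin(items[items >= 5].index)]
    if len(kept) == len(df):
        break
    df = kept

# Remap ids to contiguous integers (1-based; 0 is reserved for padding).
user_map = {u: i + 1 for i, u in enumerate(df["user"].unique())}
item_map = {v: i + 1 for i, v in enumerate(df["item"].unique())}

# Build each user's chronologically ordered interaction sequence.
df = df.sort_values("timestamp")
sequences = defaultdict(list)
for u, v in zip(df["user"], df["item"]):
    sequences[user_map[u]].append(item_map[v])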

Library Versions

  • Python: 3.9.12
  • PyTorch: 1.10
  • NumPy: 1.21.2
  • Pandas: 1.3.4

We provide the environment.yaml file so that the required packages can be installed directly:

conda env create --file environment.yaml
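After creating and activating the environment, a quick sanity check that the installed versions match the list above (nothing repo-specific):

# Verify the library versions listed above.
import sys
import numpy
import pandas
import torch

print(sys.version.split()[0])   # expect 3.9.12
print(torch.__version__)        # expect 1.10.x
print(numpy.__version__)        # expect 1.21.2
print(pandas.__version__)       # expect 1.3.4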

Training

1. Prepare a trained backbone model (e.g., SASRec, FMLP).

We share the pretrained backbone encoders.
You can download a pretrained model and put it in the save_model/{DATA_NAME} directory; a sketch for inspecting such a checkpoint follows the commands below.

Alternatively, you can train the SASRec or FMLP model yourself with the following commands.

SASRec

# In the shell code, please change the 'data' variable 
bash shell/SASRec/train_amazon.sh # Dataset: Clothing, Sports, Beauty, Grocery, Automotive, Music

FMLP

# In the shell code, please change the 'data' variable 
bash shell/FMLP/train_amazon.sh # Dataset: Clothing, Sports, Beauty, Grocery, Automotive, Music 
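If you use a downloaded checkpoint instead, here is a small sketch for inspecting it before training MELT on top. The path save_model/Beauty/SASRec.pt and the checkpoint layout are assumptions; use whatever the download actually contains.

# Inspect a downloaded backbone checkpoint (illustrative path and layout).
import torch

ckpt = torch.load("save_model/Beauty/SASRec.pt", map_location="cpu")
if isinstance(ckpt, dict):
    # Typical case: a (possibly nested) state dict of parameter tensors.
    for name, value in ckpt.items():
        print(name, getattr(value, "shape", type(value)))
else:
    # Some checkpoints store the whole serialized model instead.
    print(type(ckpt))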

2. Train the MELT framework.

In the shell code, please set the hyper-parameters by referring to the Hyperparameter Settings section below. For example, λ_u corresponds to lamb_u, λ_i corresponds to lamb_i, and E_max corresponds to e_max. A short illustration of the role of e_max follows.
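E_max is the epoch budget of the curriculum learning-based training mentioned in the abstract. The exact schedule is defined in the paper; the sketch below only illustrates the general idea of a weight that ramps up and saturates at e_max (the sine ramp is an assumption, not necessarily MELT's schedule).

# Illustrative curriculum weight that ramps from 0 to 1 and saturates at e_max.
import math

def curriculum_weight(epoch: int, e_max: int) -> float:
    progress = min(epoch / e_max, 1.0)
    return math.sin(0.5 * math.pi * progress)

for epoch in (0, 45, 90, 135, 180, 225):
    print(epoch, round(curriculum_weight(epoch, e_max=180), 3))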

MELT+SASRec

# In the shell code, please change the 'data' variable 
bash shell/MELT_SASRec/train_amazon.sh # Dataset: Clothing, Sports, Beauty, Grocery, Automotive, Music 

MELT+FMLP

# In the shell code, please change the 'data' variable 
bash shell/MELT_FMLP/train_amazon.sh # Dataset: Clothing, Sports, Beauty, Grocery, Automotive, Music 

Additionally, we provide the trained MELT model for each dataset; you can download them from the link below.

When you download MELT's pre-trained model, put it in the save_model/{DATA_NAME} directory.

To train the model on the Behance or Foursquare datasets, please run the train_others.sh shell script.

Inference

SASRec

# In the shell code, please change the 'data' variable 
bash shell/SASRec/test_amazon.sh # Dataset: Clothing, Sports, Beauty, Grocery, Automotive, Music 

MELT+SASRec

# In the shell code, please change the 'data' variable 
bash shell/MELT_SASRec/test_amazon.sh # Dataset: Clothing, Sports, Beauty, Grocery, Automotive, Music 

To test the model on the Behance or Foursquare datasets, please run the test_others.sh shell script.
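The test scripts report top-K ranking quality. For reference, here is a minimal sketch of HR@K and NDCG@K, the metrics commonly used in SASRec-style evaluation (the exact evaluation protocol is the one in the paper):

# Per-example Hit Ratio and NDCG at cutoff k, given the 1-based rank of the
# ground-truth item among the scored candidates.
import math

def hit_and_ndcg_at_k(rank: int, k: int = 10):
    hit = 1.0 if rank <= k else 0.0
    ndcg = 1.0 / math.log2(rank + 1) if rank <= k else 0.0
    return hit, ndcg

print(hit_and_ndcg_at_k(1))    # (1.0, 1.0)
print(hit_and_ndcg_at_k(5))    # (1.0, ~0.387)
print(hit_and_ndcg_at_k(20))   # (0.0, 0.0)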

Algorithm

To better understand the MELT framework, we provide its training algorithm. A high-level, illustrative sketch of the training loop is given below.
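The sketch paraphrases the abstract's description as a runnable toy loop: a shared backbone, two branches (one for long-tailed users, one for long-tailed items) trained to enhance each other, and a curriculum weight that grows over epochs. Every module and loss here is a placeholder, not the repository's actual classes or objectives.

# Toy stand-ins so the sketch runs end-to-end; the real components live in
# this repository and the real objectives are defined in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Linear(8, 8)       # stands in for a SASRec/FMLP encoder
user_branch = nn.Linear(8, 8)    # long-tailed user branch (placeholder)
item_branch = nn.Linear(8, 8)    # long-tailed item branch (placeholder)
params = (list(backbone.parameters()) + list(user_branch.parameters())
          + list(item_branch.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
lamb_u, lamb_i, e_max = 0.2, 0.3, 180  # example values from the tables below

for epoch in range(200):
    w = min(epoch / e_max, 1.0)            # curriculum weight (see above)
    x = torch.randn(16, 8)                 # fake batch of sequence states
    target = torch.randn(16, 8)            # fake supervision signal
    h = backbone(x)
    loss_rec = F.mse_loss(h, target)                    # placeholder rec. loss
    loss_user = F.mse_loss(user_branch(h), target)      # placeholder user loss
    loss_item = F.mse_loss(item_branch(h), target)      # placeholder item loss
    loss = loss_rec + w * (lamb_u * loss_user + lamb_i * loss_item)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()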

Hyperparameter Settings

  • MELT+SASRec

    Data        lamb_u  lamb_i  e_max  Pareto(%) (a)
    Music       0.2     0.3     180    20
    Automotive  0.1     0.4     180    20
    Beauty      0.1     0.4     180    20
    Sports      0.4     0.3     200    20
    Clothing    0.3     0.2     200    20
    Grocery     0.1     0.3     180    20
    Foursquare  0.1     0.1     200    50
    Behance     0.1     0.2     180    50

  • MELT+FMLP

    Data        lamb_u  lamb_i  e_max  Pareto(%) (a)
    Music       0.1     0.1     200    20
    Automotive  0.3     0.3     160    20
    Beauty      0.2     0.1     200    20
    Sports      0.2     0.3     180    20
    Clothing    0.4     0.3     180    20
    Grocery     0.2     0.2     200    20
    Foursquare  0.1     0.1     200    50
    Behance     0.1     0.3     200    50
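The Pareto(%) column is the split ratio a: following the Pareto principle, the top a% of users/items by interaction count are treated as head, and the remainder as tail. A minimal sketch of such a split (names are illustrative):

# Split entity ids into head/tail sets: the top a% by count are head.
from collections import Counter

def pareto_split(interactions, a=20):
    counts = Counter(interactions)
    ranked = [eid for eid, _ in counts.most_common()]
    cut = max(1, int(len(ranked) * a / 100))
    return set(ranked[:cut]), set(ranked[cut:])

items = [1, 1, 1, 2, 2, 3, 4, 4, 4, 4, 5]
head, tail = pareto_split(items, a=20)
print(head, tail)   # the single most-interacted item id falls in head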

Citation

@inproceedings{kim2023melt,
  title={MELT: Mutual Enhancement of Long-Tailed User and Item for Sequential Recommendation},
  author={Kim, Kibum and Hyun, Dongmin and Yun, Sukwon and Park, Chanyoung},
  booktitle={Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages={68--77},
  year={2023}
}
