
ULTS

A unified and standardized library of unsupervised representation learning approaches for time series

Description of this library:

ULTS is a unified and standardized library under the PyTorch framework to enable quick and convenient evaluations on unsupervised representation learning approaches for time series. ULTS integrates 17 representative models covering 2 deep clustering, 2 reconstruction-based and 13 self-supervised learning methods including 2 adversarial, 2 predictive and 9 contrastive ones. For more information, please refer to our paper: Unsupervised Representation Learning for Time Series: A Review.

Abstract

Unsupervised representation learning approaches aim to learn discriminative feature representations from unlabeled data, without the requirement of annotating every sample. Enabling unsupervised representation learning is particularly crucial for time series data, due to its unique annotation bottleneck caused by its complex characteristics and lack of visual cues compared with other data modalities. In recent years, unsupervised representation learning techniques have advanced rapidly in various domains. However, there is a lack of systematic analysis of unsupervised representation learning approaches for time series. To fill the gap, we conduct a comprehensive literature review of existing rapidly evolving unsupervised representation learning approaches for time series. Moreover, we also develop a unified and standardized library, named ULTS (i.e., Unsupervised Learning for Time Series), to facilitate fast implementations and unified evaluations on various models. With ULTS, we empirically evaluate state-of-the-art approaches, especially the rapidly evolving contrastive learning methods, on 9 diverse real-world datasets. We further discuss practical considerations as well as open research challenges on unsupervised representation learning for time series to facilitate future research in this field.

Taxonomy:

(figure: taxonomy of unsupervised representation learning approaches for time series)

Organization:

(figure: organization of the ULTS library)

Models Implemented in ULTS:

| 1st Category | 2nd Category | 3rd Category | Model | Official Implementation |
| --- | --- | --- | --- | --- |
| Deep Clustering Methods | - | - | DeepCluster | https://github.com/facebookresearch/deepcluster |
| | - | - | IDFD | https://github.com/TTN-YKK/Clustering_friendly_representation_learning |
| Reconstruction-based Methods | - | - | TimeNet | https://github.com/paudan/TimeNet |
| | - | - | Deconv | https://github.com/cauchyturing/Deconv_SAX |
| Self-supervised Learning Methods | Adversarial | - | TimeGAN | https://github.com/jsyoon0823/TimeGAN |
| | | - | TimeVAE | https://github.com/abudesai/timeVAE |
| | Predictive | - | EEG-SSL | https://github.com/mlberkeley/eeg-ssl |
| | | - | TST | https://github.com/gzerveas/mvts_transformer |
| | Contrastive | Instance-Level | SimCLR | https://github.com/google-research/simclr |
| | | | BYOL | https://github.com/deepmind/deepmind-research/tree/master/byol |
| | | | CPC | https://github.com/facebookresearch/CPC_audio |
| | | Prototype-Level | SwAV | https://github.com/facebookresearch/swav |
| | | | PCL | https://github.com/salesforce/PCL |
| | | | MHCCL | https://github.com/mqwfrog/MHCCL |
| | | Temporal-Level | TS2Vec | https://github.com/yuezhihan/ts2vec |
| | | | TS-TCC | https://github.com/emadeldeen24/TS-TCC |
| | | | T-Loss | https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries |

Requirements for this library:

  • Python ≥ 3.6
  • PyTorch ≥ 1.4

Required packages for this library:

  • numpy
  • sklearn (scikit-learn)
  • openpyxl
  • torchvision
  • pandas
  • matplotlib
  • scipy
  • pynndescent
  • tqdm
  • tensorboard_logger

The remaining modules imported by the code (random, copy, time, collections, builtins, math, shutil, os, sys, warnings, argparse) are part of the Python standard library and need no separate installation.

Data:

  • The UCI archive includes 85 multivariate time series datasets for classification tasks. These datasets cover various application fields including audio spectra classification, business, ECG/EEG classification, human activity recognition, gas detection, motion classification, etc.
  • The UEA archive includes 30 multivariate time series datasets, covering the application fields of audio spectra classification, ECG/EEG/MEG classification, human activity recognition, motion classification, etc.
  • The MTS archive, also known as Baydogan's archive, includes 13 multivariate time series datasets, covering the application fields of audio spectra classification, ECG classification, human activity recognition, motion classification, etc.

Codes:

The code in the ULTS library is organized as follows:

  • main.py includes the training procedure for all models.
  • The models folder contains all 17 unsupervised learning models.
  • The data_preprocess folder contains the code to preprocess data from the different archives.
  • The data_loader folder contains the code to perform augmentation transformations and to load the datasets.
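As an illustration of the kind of augmentation transformations a time-series data loader applies, here is a minimal sketch of two common ones, jittering and scaling. The function names and parameters below are illustrative assumptions, not ULTS's actual API; the real transformations live in the data_loader folder.

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Add Gaussian noise to a (time, channels) series (hypothetical example)."""
    rng = rng or np.random.default_rng(0)
    return x + rng.normal(0.0, sigma, size=x.shape)

def scaling(x, sigma=0.1, rng=None):
    """Multiply each channel by a random factor around 1 (hypothetical example)."""
    rng = rng or np.random.default_rng(0)
    factors = rng.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors

x = np.zeros((128, 3))      # a dummy 128-step, 3-channel series
aug = scaling(jitter(x))    # two stochastic "views" of the same sample
print(aug.shape)            # (128, 3)
```

Contrastive methods such as SimCLR or TS-TCC typically build positive pairs by applying two such random transformations to the same input.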

Running:

python main.py \
--dataset_name wisdm \
--uid SimCLR \
--lr 0.03 \
--batch_size 128 \
--feature_size 128
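The flags in the command above map naturally onto an argparse interface. The following sketch uses only the flag names shown in that invocation; the defaults and help strings are illustrative assumptions, not ULTS's actual values.

```python
import argparse

# Sketch of the argument parsing implied by the example command;
# defaults here are assumptions for illustration only.
parser = argparse.ArgumentParser(description="ULTS training entry point (sketch)")
parser.add_argument("--dataset_name", type=str, default="wisdm")
parser.add_argument("--uid", type=str, default="SimCLR",
                    help="identifier of the model to train")
parser.add_argument("--lr", type=float, default=0.03)
parser.add_argument("--batch_size", type=int, default=128)
parser.add_argument("--feature_size", type=int, default=128)

args = parser.parse_args(["--dataset_name", "wisdm", "--uid", "SimCLR",
                          "--lr", "0.03", "--batch_size", "128",
                          "--feature_size", "128"])
print(args.uid, args.lr)    # SimCLR 0.03
```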

Results:

  • The experimental results are saved in the "experiment_{args.model}_{args.dataset}" directory by default.
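The default output path follows the "experiment_{args.model}_{args.dataset}" pattern quoted above. A small sketch of how such a directory name is derived (a SimpleNamespace stands in for the parsed arguments):

```python
import os
from types import SimpleNamespace

# Stand-in for the parsed command-line arguments; the attribute names
# mirror the "experiment_{args.model}_{args.dataset}" pattern above.
args = SimpleNamespace(model="SimCLR", dataset="wisdm")

out_dir = f"experiment_{args.model}_{args.dataset}"
os.makedirs(out_dir, exist_ok=True)   # create the results directory if missing
print(out_dir)                        # experiment_SimCLR_wisdm
```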

Citation:

If you find this code helpful, please cite our paper:

@misc{meng2023unsupervised, 
      title={Unsupervised Representation Learning for Time Series: A Review}, 
      author={Qianwen Meng and Hangwei Qian and Yong Liu and Yonghui Xu and Zhiqi Shen and Lizhen Cui},   
      year={2023},
      eprint={2308.01578},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

References:

Parts of the code are adapted from the following repositories:
https://github.com/PatrickHua/SimSiam
https://github.com/facebookresearch/deepcluster
https://github.com/TTN-YKK/Clustering_friendly_representation_learning
https://github.com/paudan/TimeNet
https://github.com/cauchyturing/Deconv_SAX
https://github.com/jsyoon0823/TimeGAN
https://github.com/abudesai/timeVAE
https://github.com/joergsimon/SSL-ECG-Paper-Reimplementaton
https://github.com/mlberkeley/eeg-ssl
https://github.com/gzerveas/mvts_transformer
https://github.com/google-research/simclr
https://github.com/deepmind/deepmind-research/tree/master/byol
https://github.com/lucidrains/byol-pytorch
https://github.com/facebookresearch/CPC_audio
https://github.com/abhinavagarwalla/swav-cifar10
https://github.com/facebookresearch/swav
https://github.com/salesforce/PCL
https://github.com/mqwfrog/MHCCL
https://github.com/yuezhihan/ts2vec
https://github.com/emadeldeen24/TS-TCC
https://github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries
