From 712708100ab8793b167739629c949396db6a46e8 Mon Sep 17 00:00:00 2001
From: Wenjie Du
Date: Mon, 13 May 2024 15:19:44 +0800
Subject: [PATCH] Update docs and references (#410)
* docs: rename issue label "new model" to "new algo";
* docs: update docs and references;
---
.github/ISSUE_TEMPLATE/model-addition.yml | 2 +-
README.md | 83 +++++++------
README_zh.md | 72 +++++++-----
docs/index.rst | 22 +++-
docs/pypots.imputation.rst | 66 ++++++++++-
docs/references.bib | 137 +++++++++++++++++++---
pypots/imputation/revinscinet/model.py | 2 +-
pypots/imputation/scinet/model.py | 2 +-
8 files changed, 292 insertions(+), 94 deletions(-)
diff --git a/.github/ISSUE_TEMPLATE/model-addition.yml b/.github/ISSUE_TEMPLATE/model-addition.yml
index 6024f67a..c4ee5842 100644
--- a/.github/ISSUE_TEMPLATE/model-addition.yml
+++ b/.github/ISSUE_TEMPLATE/model-addition.yml
@@ -1,6 +1,6 @@
name: "🤙 New Model Addition"
description: Submit a request to implement a new model.
-labels: ["new model", "enhancement"]
+labels: ["new algo", "enhancement"]
body:
- type: textarea
diff --git a/README.md b/README.md
index 802fb2db..db55334b 100644
--- a/README.md
+++ b/README.md
@@ -53,7 +53,7 @@
-
+
@@ -86,7 +86,7 @@ The rest of this readme file is organized as follows:
## ❖ Available Algorithms
PyPOTS supports imputation, classification, clustering, forecasting, and anomaly detection tasks on multivariate partially-observed
time series with missing values. The table below shows the availability of each algorithm (sorted by Year) in PyPOTS for different tasks.
-The symbol ✅ indicates the algorithm is available for the corresponding task (note that models will be continuously updated
+The symbol `✅` indicates the algorithm is available for the corresponding task (note that models will be continuously updated
in the future to handle tasks that are not currently supported. Stay tuned❗️).
🌟 Since **v0.2**, all neural-network models in PyPOTS have hyperparameter-optimization support.
@@ -94,10 +94,10 @@ This functionality is implemented with the [Microsoft NNI](https://github.com/microsoft/nni) framework. You can refer to our time series
imputation survey repo [Awesome_Imputation](https://github.com/WenjieDu/Awesome_Imputation) to see how to config and
tune the hyperparameters.
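
To make the tuning workflow concrete, below is a minimal sketch of a single NNI trial driving a PyPOTS model. It is illustrative only: the search-space keys, the SAITS constructor arguments, and the toy data are assumptions for demonstration, not the exact tuning setup shipped with PyPOTS.

```python
# Illustrative sketch of one NNI trial tuning a PyPOTS imputation model.
# The SAITS arguments and search-space keys below are assumptions, not the official setup.
import nni
import numpy as np
from pypots.imputation import SAITS

# Toy POTS data: [n_samples, n_steps, n_features] with NaNs marking missing values.
rng = np.random.default_rng(0)
train_X = rng.standard_normal((500, 48, 37))
train_X[rng.random(train_X.shape) < 0.1] = np.nan
val_X_ori = rng.standard_normal((100, 48, 37))
val_X = val_X_ori.copy()
held_out = rng.random(val_X.shape) < 0.1
val_X[held_out] = np.nan  # artificially mask some values to score the imputation

params = nni.get_next_parameter()  # e.g. {"n_layers": 2, "d_model": 256}
d_model = params.get("d_model", 256)
n_heads = 4

model = SAITS(
    n_steps=48,
    n_features=37,
    n_layers=params.get("n_layers", 2),
    d_model=d_model,
    d_ffn=d_model // 2,
    n_heads=n_heads,
    d_k=d_model // n_heads,
    d_v=d_model // n_heads,
    dropout=0.1,
    epochs=10,
)
model.fit({"X": train_X})                # PyPOTS imputers consume dict datasets keyed by "X"
imputation = model.impute({"X": val_X})

# Report the MAE on the artificially-masked positions back to the NNI tuner.
mae = float(np.mean(np.abs(imputation - val_X_ori)[held_out]))
nni.report_final_result(mae)
```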
-🔥 Note that Transformer, iTransformer, FreTS, Crossformer, PatchTST, DLinear, ETSformer, Pyraformer, Nonstationary Transformer, FiLM, FEDformer, Informer, Autoformer
-are not proposed as imputation methods in their original papers, and they cannot accept POTS as input.
-**To make them applicable on POTS data, we apply the embedding strategy and training approach (ORT+MIT)
-the same as we did in [SAITS paper](https://arxiv.org/pdf/2202.08516).**
+🔥 Note that all models marked with `🧑🔧` in the table (e.g. Transformer, iTransformer, Informer) were not originally
+proposed as algorithms for POTS data in their papers, and they cannot directly accept time series with missing values as input,
+let alone perform imputation. **To make them applicable to POTS data, we apply the same embedding strategy and
+training approach (ORT+MIT) as we did in the [SAITS paper](https://arxiv.org/pdf/2202.08516).**
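
For readers who want the ORT+MIT idea spelled out, here is a minimal sketch of the joint objective from the SAITS paper: ORT penalizes reconstruction error on the values the model actually sees, while MIT artificially masks some observed values during training and penalizes the error on recovering them. The tensor names are illustrative, not the exact variables used inside PyPOTS.

```python
import torch

def masked_mae(pred, target, mask):
    """Mean absolute error evaluated only where mask == 1."""
    return torch.sum(torch.abs(pred - target) * mask) / (torch.sum(mask) + 1e-12)

# Toy batch of shape [batch, n_steps, n_features]; names are illustrative.
X_ori = torch.randn(8, 48, 37)                                            # values that are known
observed = (torch.rand_like(X_ori) > 0.2).float()                         # 1 = actually observed
artificially_masked = (torch.rand_like(X_ori) > 0.9).float() * observed   # held out for MIT
input_mask = observed * (1.0 - artificially_masked)                       # what the model may see
X_in = X_ori * input_mask                                                  # model input with holes zeroed out

reconstruction = torch.randn_like(X_ori)                                   # stand-in for a model's output

ort_loss = masked_mae(reconstruction, X_in, input_mask)                    # ORT: reconstruct the seen values
mit_loss = masked_mae(reconstruction, X_ori, artificially_masked)          # MIT: recover the held-out values
loss = ort_loss + mit_loss                                                 # joint ORT+MIT training objective
```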
The task types are abbreviated as follows:
**`IMPU`**: Imputation;
@@ -107,36 +107,41 @@ The task types are abbreviated as follows:
**`ANOD`**: Anomaly Detection.
The paper references and links are all listed at the bottom of this file.
-| **Type** | **Algo** | **IMPU** | **FORE** | **CLAS** | **CLUS** | **ANOD** | **Year - Venue** |
-|:--------------|:-----------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:-------------------|
-| Neural Net | iTransformer[^24] | ✅ | | | | | `2024 - ICLR` |
-| Neural Net | SAITS[^1] | ✅ | | | | | `2023 - ESWA` |
-| Neural Net | FreTS[^23] | ✅ | | | | | `2023 - NeurIPS` |
-| Neural Net | Crossformer[^16] | ✅ | | | | | `2023 - ICLR` |
-| Neural Net | TimesNet[^14] | ✅ | | | | | `2023 - ICLR` |
-| Neural Net | PatchTST[^18] | ✅ | | | | | `2023 - ICLR` |
-| Neural Net | ETSformer[^19] | ✅ | | | | | `2023 - ICLR` |
-| Neural Net | DLinear[^17] | ✅ | | | | | `2023 - AAAI` |
-| Neural Net    | Nonstationary Transformer[^25]     | ✅        |          |          |          |          | `2022 - NeurIPS`    |
-| Neural Net | FiLM[^22] | ✅ | | | | | `2022 - NeurIPS` |
-| Neural Net | Pyraformer[^26] | | | ✅ | | | `2022 - ICLR` |
-| Neural Net | Raindrop[^5] | | | ✅ | | | `2022 - ICLR` |
-| Neural Net | FEDformer[^20] | ✅ | | | | | `2022 - ICML` |
-| Neural Net | Autoformer[^15] | ✅ | | | | | `2021 - NeurIPS` |
-| Neural Net | CSDI[^12] | ✅ | ✅ | | | | `2021 - NeurIPS` |
-| Neural Net | Informer[^21] | ✅ | | | | | `2021 - AAAI` |
-| Neural Net | US-GAN[^10] | ✅ | | | | | `2021 - AAAI` |
-| Neural Net | CRLI[^6] | | | | ✅ | | `2021 - AAAI` |
-| Probabilistic | BTTF[^8] | | ✅ | | | | `2021 - TPAMI` |
-| Neural Net | GP-VAE[^11] | ✅ | | | | | `2020 - AISTATS` |
-| Neural Net | VaDER[^7] | | | | ✅ | | `2019 - GigaSci.` |
-| Neural Net | M-RNN[^9] | ✅ | | | | | `2019 - TBME` |
-| Neural Net | BRITS[^3] | ✅ | | ✅ | | | `2018 - NeurIPS` |
-| Neural Net | GRU-D[^4] | ✅ | | ✅ | | | `2018 - Sci. Rep.` |
-| Neural Net | Transformer[^2] | ✅ | | | | | `2017 - NeurIPS` |
-| Naive | LOCF/NOCB | ✅ | | | | | |
-| Naive | Mean | ✅ | | | | | |
-| Naive | Median | ✅ | | | | | |
+| **Type** | **Algo** | **IMPU** | **FORE** | **CLAS** | **CLUS** | **ANOD** | **Year - Venue** |
+|:--------------|:----------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:-------------------|
+| Neural Net | iTransformer🧑🔧[^24] | ✅ | | | | | `2024 - ICLR` |
+| Neural Net | SAITS[^1] | ✅ | | | | | `2023 - ESWA` |
+| Neural Net | FreTS🧑🔧[^23] | ✅ | | | | | `2023 - NeurIPS` |
+| Neural Net | Koopa🧑🔧[^29] | ✅ | | | | | `2023 - NeurIPS` |
+| Neural Net | Crossformer🧑🔧[^16] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | TimesNet[^14] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | PatchTST🧑🔧[^18] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | ETSformer🧑🔧[^19] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | MICN🧑🔧[^27] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | DLinear🧑🔧[^17] | ✅ | | | | | `2023 - AAAI` |
+| Neural Net | TiDE🧑🔧[^28] | ✅ | | | | | `2023 - TMLR` |
+| Neural Net | SCINet🧑🔧[^30] | ✅ | | | | | `2022 - NeurIPS` |
+| Neural Net | Nonstationary Tr.🧑🔧[^25] | ✅ | | | | | `2022 - NeurIPS` |
+| Neural Net | FiLM🧑🔧[^22] | ✅ | | | | | `2022 - NeurIPS` |
+| Neural Net    | RevIN_SCINet🧑🔧[^31]        | ✅        |          |          |          |          | `2022 - ICLR`       |
+| Neural Net | Pyraformer🧑🔧[^26] | ✅ | | | | | `2022 - ICLR` |
+| Neural Net | Raindrop[^5] | | | ✅ | | | `2022 - ICLR` |
+| Neural Net | FEDformer🧑🔧[^20] | ✅ | | | | | `2022 - ICML` |
+| Neural Net | Autoformer🧑🔧[^15] | ✅ | | | | | `2021 - NeurIPS` |
+| Neural Net | CSDI[^12] | ✅ | ✅ | | | | `2021 - NeurIPS` |
+| Neural Net | Informer🧑🔧[^21] | ✅ | | | | | `2021 - AAAI` |
+| Neural Net | US-GAN[^10] | ✅ | | | | | `2021 - AAAI` |
+| Neural Net | CRLI[^6] | | | | ✅ | | `2021 - AAAI` |
+| Probabilistic | BTTF[^8] | | ✅ | | | | `2021 - TPAMI` |
+| Neural Net | GP-VAE[^11] | ✅ | | | | | `2020 - AISTATS` |
+| Neural Net | VaDER[^7] | | | | ✅ | | `2019 - GigaSci.` |
+| Neural Net | M-RNN[^9] | ✅ | | | | | `2019 - TBME` |
+| Neural Net | BRITS[^3] | ✅ | | ✅ | | | `2018 - NeurIPS` |
+| Neural Net | GRU-D[^4] | ✅ | | ✅ | | | `2018 - Sci. Rep.` |
+| Neural Net | Transformer🧑🔧[^2] | ✅ | | | | | `2017 - NeurIPS` |
+| Naive | LOCF/NOCB | ✅ | | | | | |
+| Naive | Mean | ✅ | | | | | |
+| Naive | Median | ✅ | | | | | |
## ❖ PyPOTS Ecosystem
@@ -356,7 +361,11 @@ PyPOTS community is open, transparent, and surely friendly. Let's work together
[^24]: Liu, Y., Hu, T., Zhang, H., Wu, H., Wang, S., Ma, L., & Long, M. (2024). [iTransformer: Inverted Transformers Are Effective for Time Series Forecasting](https://openreview.net/forum?id=JePfAI8fah). *ICLR 2024*.
[^25]: Liu, Y., Wu, H., Wang, J., & Long, M. (2022). [Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting](https://proceedings.neurips.cc/paper_files/paper/2022/hash/4054556fcaa934b0bf76da52cf4f92cb-Abstract-Conference.html). *NeurIPS 2022*.
[^26]: Liu, S., Yu, H., Liao, C., Li, J., Lin, W., Liu, A. X., & Dustdar, S. (2022). [Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting](https://openreview.net/forum?id=0EXmFzUn5I). *ICLR 2022*.
-
+[^27]: Wang, H., Peng, J., Huang, F., Wang, J., Chen, J., & Xiao, Y. (2023). [MICN: Multi-scale Local and Global Context Modeling for Long-term Series Forecasting](https://openreview.net/forum?id=zt53IDUR1U). *ICLR 2023*.
+[^28]: Das, A., Kong, W., Leach, A., Mathur, S., Sen, R., & Yu, R. (2023). [Long-term Forecasting with TiDE: Time-series Dense Encoder](https://openreview.net/forum?id=pCbC3aQB5W). *TMLR 2023*.
+[^29]: Liu, Y., Li, C., Wang, J., & Long, M. (2023). [Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors](https://proceedings.neurips.cc/paper_files/paper/2023/hash/28b3dc0970fa4624a63278a4268de997-Abstract-Conference.html). *NeurIPS 2023*.
+[^30]: Liu, M., Zeng, A., Chen, M., Xu, Z., Lai, Q., Ma, L., & Xu, Q. (2022). [SCINet: Time Series Modeling and Forecasting with Sample Convolution and Interaction](https://proceedings.neurips.cc/paper_files/paper/2022/hash/266983d0949aed78a16fa4782237dea7-Abstract-Conference.html). *NeurIPS 2022*.
+[^31]: Kim, T., Kim, J., Tae, Y., Park, C., Choi, J. H., & Choo, J. (2022). [Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift](https://openreview.net/forum?id=cGDAkQo1C0p). *ICLR 2022*.
diff --git a/README_zh.md b/README_zh.md
index 59159a3f..98ecc5e8 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -53,7 +53,7 @@
-
+
@@ -83,46 +83,51 @@
## ❖ 支持的算法
PyPOTS当前支持多变量POTS数据的插补,预测,分类,聚类以及异常检测五类任务。下表描述了当前PyPOTS中所集成的算法以及对应不同任务的可用性。
-符号 ✅ 表示该算法当前可用于相应的任务(注意,目前模型尚不支持的任务在未来版本中可能会逐步添加,敬请关注!)。
+符号`✅`表示该算法当前可用于相应的任务(注意,目前模型尚不支持的任务在未来版本中可能会逐步添加,敬请关注!)。
算法的参考文献以及论文链接在该文档底部可以找到。
🌟 自**v0.2**版本开始, PyPOTS中所有神经网络模型都支持超参数调优。该功能基于[微软的NNI](https://github.com/microsoft/nni)框架实现。
你可以通过参考我们的时间序列插补综述项目的代码[Awesome_Imputation](https://github.com/WenjieDu/Awesome_Imputation)来了解如何使用PyPOTS调优模型的超参。
-🔥 请注意: Transformer, iTransformer, FreTS, Crossformer, PatchTST, DLinear, ETSformer, Pyraformer, Nonstationary Transformer, FiLM, FEDformer, Informer, Autoformer
-模型在它们的原始论文中并未用作插补方法,因此这些模型的输入中不能带有缺失值, 所以无法接受POTS数据作为输入。
+🔥 请注意: 表格中名称带有`🧑🔧`的模型(例如Transformer, iTransformer, Informer等)在它们的原始论文中并非作为可以处理POTS数据的算法提出,
+所以这些模型无法直接接受带有缺失值的POTS数据作为输入,更谈不上用于插补。
**为了使上述模型能够适用于POTS数据,我们采用了与[SAITS论文](https://arxiv.org/pdf/2202.08516)中相同的embedding策略和训练方法(ORT+MIT)对它们进行改进**。
| **类型** | **算法** | **插补** | **预测** | **分类** | **聚类** | **异常检测** | **年份 - 刊物** |
|:--------------|:-----------------------------------|:------:|:------:|:------:|:------:|:--------:|:-----------------|
-| Neural Net | iTransformer[^24] | ✅ | | | | | `2024 - ICLR` |
-| Neural Net | SAITS[^1] | ✅ | | | | | `2023 - ESWA` |
-| Neural Net | FreTS[^23] | ✅ | | | | | `2023 - NeurIPS` |
-| Neural Net | Crossformer[^16] | ✅ | | | | | `2023 - ICLR` |
-| Neural Net | TimesNet[^14] | ✅ | | | | | `2023 - ICLR` |
-| Neural Net | PatchTST[^18] | ✅ | | | | | `2023 - ICLR` |
-| Neural Net | ETSformer[^19] | ✅ | | | | | `2023 - ICLR` |
-| Neural Net | DLinear[^17] | ✅ | | | | | `2023 - AAAI` |
-| Neural Net    | Nonstationary Transformer[^25]     | ✅        |          |          |          |          | `2022 - NeurIPS`    |
-| Neural Net | FiLM[^22] | ✅ | | | | | `2022 - NeurIPS` |
-| Neural Net | Pyraformer[^26] | | | ✅ | | | `2022 - ICLR` |
-| Neural Net | Raindrop[^5] | | | ✅ | | | `2022 - ICLR` |
-| Neural Net | FEDformer[^20] | ✅ | | | | | `2022 - ICML` |
-| Neural Net | Autoformer[^15] | ✅ | | | | | `2021 - NeurIPS` |
-| Neural Net | CSDI[^12] | ✅ | ✅ | | | | `2021 - NeurIPS` |
-| Neural Net | Informer[^21] | ✅ | | | | | `2021 - AAAI` |
-| Neural Net | US-GAN[^10] | ✅ | | | | | `2021 - AAAI` |
-| Neural Net | CRLI[^6] | | | | ✅ | | `2021 - AAAI` |
-| Probabilistic | BTTF[^8] | | ✅ | | | | `2021 - TPAMI` |
-| Neural Net | GP-VAE[^11] | ✅ | | | | | `2020 - AISTATS` |
-| Neural Net | VaDER[^7] | | | | ✅ | | `2019 - GigaSci.` |
-| Neural Net | M-RNN[^9] | ✅ | | | | | `2019 - TBME` |
-| Neural Net | BRITS[^3] | ✅ | | ✅ | | | `2018 - NeurIPS` |
-| Neural Net | GRU-D[^4] | ✅ | | ✅ | | | `2018 - Sci. Rep.` |
-| Neural Net | Transformer[^2] | ✅ | | | | | `2017 - NeurIPS` |
-| Naive | LOCF/NOCB | ✅ | | | | | |
-| Naive | Mean | ✅ | | | | | |
-| Naive | Median | ✅ | | | | | |
+| Neural Net | iTransformer🧑🔧[^24] | ✅ | | | | | `2024 - ICLR` |
+| Neural Net | SAITS[^1] | ✅ | | | | | `2023 - ESWA` |
+| Neural Net | FreTS🧑🔧[^23] | ✅ | | | | | `2023 - NeurIPS` |
+| Neural Net | Koopa🧑🔧[^29] | ✅ | | | | | `2023 - NeurIPS` |
+| Neural Net | Crossformer🧑🔧[^16] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | TimesNet[^14] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | PatchTST🧑🔧[^18] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | ETSformer🧑🔧[^19] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | MICN🧑🔧[^27] | ✅ | | | | | `2023 - ICLR` |
+| Neural Net | DLinear🧑🔧[^17] | ✅ | | | | | `2023 - AAAI` |
+| Neural Net | TiDE🧑🔧[^28] | ✅ | | | | | `2023 - TMLR` |
+| Neural Net | SCINet🧑🔧[^30] | ✅ | | | | | `2022 - NeurIPS` |
+| Neural Net | Nonstationary Tr.🧑🔧[^25] | ✅ | | | | | `2022 - NeurIPS` |
+| Neural Net | FiLM🧑🔧[^22] | ✅ | | | | | `2022 - NeurIPS` |
+| Neural Net    | RevIN_SCINet🧑🔧[^31]        | ✅        |          |          |          |          | `2022 - ICLR`       |
+| Neural Net | Pyraformer🧑🔧[^26] | ✅ | | | | | `2022 - ICLR` |
+| Neural Net | Raindrop[^5] | | | ✅ | | | `2022 - ICLR` |
+| Neural Net | FEDformer🧑🔧[^20] | ✅ | | | | | `2022 - ICML` |
+| Neural Net | Autoformer🧑🔧[^15] | ✅ | | | | | `2021 - NeurIPS` |
+| Neural Net | CSDI[^12] | ✅ | ✅ | | | | `2021 - NeurIPS` |
+| Neural Net | Informer🧑🔧[^21] | ✅ | | | | | `2021 - AAAI` |
+| Neural Net | US-GAN[^10] | ✅ | | | | | `2021 - AAAI` |
+| Neural Net | CRLI[^6] | | | | ✅ | | `2021 - AAAI` |
+| Probabilistic | BTTF[^8] | | ✅ | | | | `2021 - TPAMI` |
+| Neural Net | GP-VAE[^11] | ✅ | | | | | `2020 - AISTATS` |
+| Neural Net | VaDER[^7] | | | | ✅ | | `2019 - GigaSci.` |
+| Neural Net | M-RNN[^9] | ✅ | | | | | `2019 - TBME` |
+| Neural Net | BRITS[^3] | ✅ | | ✅ | | | `2018 - NeurIPS` |
+| Neural Net | GRU-D[^4] | ✅ | | ✅ | | | `2018 - Sci. Rep.` |
+| Neural Net | Transformer🧑🔧[^2] | ✅ | | | | | `2017 - NeurIPS` |
+| Naive | LOCF/NOCB | ✅ | | | | | |
+| Naive | Mean | ✅ | | | | | |
+| Naive | Median | ✅ | | | | | |
## ❖ PyPOTS生态系统
@@ -331,6 +336,9 @@ PyPOTS社区是一个开放、透明、友好的社区,让我们共同努力
[^24]: Liu, Y., Hu, T., Zhang, H., Wu, H., Wang, S., Ma, L., & Long, M. (2024). [iTransformer: Inverted Transformers Are Effective for Time Series Forecasting](https://openreview.net/forum?id=JePfAI8fah). *ICLR 2024*.
[^25]: Liu, Y., Wu, H., Wang, J., & Long, M. (2022). [Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting](https://proceedings.neurips.cc/paper_files/paper/2022/hash/4054556fcaa934b0bf76da52cf4f92cb-Abstract-Conference.html). *NeurIPS 2022*.
[^26]: Liu, S., Yu, H., Liao, C., Li, J., Lin, W., Liu, A. X., & Dustdar, S. (2022). [Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting](https://openreview.net/forum?id=0EXmFzUn5I). *ICLR 2022*.
+[^27]: Wang, H., Peng, J., Huang, F., Wang, J., Chen, J., & Xiao, Y. (2023). [MICN: Multi-scale Local and Global Context Modeling for Long-term Series Forecasting](https://openreview.net/forum?id=zt53IDUR1U). *ICLR 2023*.
+[^28]: Das, A., Kong, W., Leach, A., Mathur, S., Sen, R., & Yu, R. (2023). [Long-term Forecasting with TiDE: Time-series Dense Encoder](https://openreview.net/forum?id=pCbC3aQB5W). *TMLR 2023*.
+[^29]: Liu, Y., Li, C., Wang, J., & Long, M. (2023). [Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors](https://proceedings.neurips.cc/paper_files/paper/2023/hash/28b3dc0970fa4624a63278a4268de997-Abstract-Conference.html). *NeurIPS 2023*.
diff --git a/docs/index.rst b/docs/index.rst
index 57c0d8dc..c2b5101c 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -72,7 +72,7 @@ Welcome to PyPOTS docs!
:alt: arXiv DOI
:target: https://arxiv.org/abs/2305.18811
-.. image:: https://img.shields.io/badge/README-%F0%9F%87%A8%F0%9F%87%B3中文版-FCEFE8
+.. image:: https://pypots.com/figs/pypots_logos/readme/CN.svg
:alt: README in Chinese
:target: https://github.com/WenjieDu/PyPOTS/blob/main/README_zh.md
@@ -117,9 +117,9 @@ This functionality is implemented with the `Microsoft NNI <https://github.com/microsoft/nni>`_ framework. You can refer to our time series
imputation survey repo `Awesome_Imputation <https://github.com/WenjieDu/Awesome_Imputation>`_ to see how to config and
tune the hyperparameters.
-🔥 Note that Transformer, iTransformer, FreTS, Crossformer, PatchTST, DLinear, ETSformer, Pyraformer, Nonstationary Transformer, FiLM, FEDformer, Informer, Autoformer
-are not proposed as imputation methods in their original papers, and they cannot accept POTS as input.
-To make them applicable on POTS data, we apply the embedding strategy and training approach (ORT+MIT)
+🔥 Note that all models marked with `🧑🔧` in the table (e.g. Transformer, iTransformer, Informer) were not originally
+proposed as algorithms for POTS data in their papers, and they cannot directly accept time series with missing values as input, let alone perform imputation.
+To make them applicable to POTS data, we apply the same embedding strategy and training approach (ORT+MIT)
as we did in the `SAITS paper <https://arxiv.org/pdf/2202.08516>`_.
The task types are abbreviated as follows: **IMPU**: Imputation; **FORE**: Forecasting;
@@ -135,6 +135,8 @@ The paper references are all listed at the bottom of this readme file.
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
| Neural Net | FreTS :cite:`yi2023frets` | ✅ | | | | | ``2023 - NeurIPS`` |
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
+| Neural Net | Koopa :cite:`liu2023koopa` | ✅ | | | | | ``2023 - NeurIPS`` |
++----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
| Neural Net | Crossformer :cite:`nie2023patchtst` | ✅ | | | | | ``2023 - ICLR`` |
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
| Neural Net | TimesNet :cite:`wu2023timesnet` | ✅ | | | | | ``2023 - ICLR`` |
@@ -143,11 +145,19 @@ The paper references are all listed at the bottom of this readme file.
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
| Neural Net | ETSformer :cite:`woo2023etsformer` | ✅ | | | | | ``2023 - ICLR`` |
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
+| Neural Net | MICN :cite:`wang2023micn` | ✅ | | | | | ``2023 - ICLR`` |
++----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
| Neural Net | DLinear :cite:`zeng2023dlinear` | ✅ | | | | | ``2023 - AAAI`` |
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
-| Neural Net | Nonstationary Tran. :cite:`liu2022nonstationary` | ✅ | | | | | ``2023 - NeurIPS`` |
+| Neural Net | TiDE :cite:`das2023tide` | ✅ | | | | | ``2023 - TMLR`` |
++----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
+| Neural Net | SCINet :cite:`liu2022scinet` | ✅ | | | | | ``2022 - NeurIPS`` |
++----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
+| Neural Net | Nonstationary Tran. :cite:`liu2022nonstationary` | ✅ | | | | | ``2022 - NeurIPS`` |
++----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
+| Neural Net | FiLM :cite:`zhou2022film` | ✅ | | | | | ``2022 - NeurIPS`` |
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
-| Neural Net | FiLM :cite:`zhou2022film` | ✅ | | | | | ``2023 - NeurIPS`` |
+| Neural Net | RevIN_SCInet :cite:`kim2022revin` | ✅ | | | | | ``2022 - ICLR`` |
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
| Neural Net | Pyraformer :cite:`liu2022pyraformer` | ✅ | | | | | ``2022 - ICLR`` |
+----------------+-----------------------------------------------------------+------+------+------+------+------+-----------------------+
diff --git a/docs/pypots.imputation.rst b/docs/pypots.imputation.rst
index 37b25519..3b0b935e 100644
--- a/docs/pypots.imputation.rst
+++ b/docs/pypots.imputation.rst
@@ -28,6 +28,15 @@ pypots.imputation.itransformer
:show-inheritance:
:inherited-members:
+pypots.imputation.koopa
+------------------------------------
+
+.. automodule:: pypots.imputation.koopa
+ :members:
+ :undoc-members:
+ :show-inheritance:
+ :inherited-members:
+
pypots.imputation.frets
------------------------------
@@ -64,6 +73,23 @@ pypots.imputation.patchtst
:show-inheritance:
:inherited-members:
+pypots.imputation.etsformer
+------------------------------
+
+.. automodule:: pypots.imputation.etsformer
+ :members:
+ :undoc-members:
+ :show-inheritance:
+ :inherited-members:
+
+pypots.imputation.micn
+------------------------------
+.. automodule:: pypots.imputation.micn
+ :members:
+ :undoc-members:
+ :show-inheritance:
+ :inherited-members:
+
pypots.imputation.dlinear
------------------------------
@@ -73,10 +99,28 @@ pypots.imputation.dlinear
:show-inheritance:
:inherited-members:
-pypots.imputation.etsformer
+pypots.imputation.tide
------------------------------
-.. automodule:: pypots.imputation.etsformer
+.. automodule:: pypots.imputation.tide
+ :members:
+ :undoc-members:
+ :show-inheritance:
+ :inherited-members:
+
+pypots.imputation.scinet
+------------------------------
+
+.. automodule:: pypots.imputation.scinet
+ :members:
+ :undoc-members:
+ :show-inheritance:
+ :inherited-members:
+
+pypots.imputation.nonstationary_transformer
+---------------------------------------------
+
+.. automodule:: pypots.imputation.nonstationary_transformer
:members:
:undoc-members:
:show-inheritance:
@@ -91,6 +135,24 @@ pypots.imputation.film
:show-inheritance:
:inherited-members:
+pypots.imputation.revinscinet
+------------------------------
+
+.. automodule:: pypots.imputation.revinscinet
+ :members:
+ :undoc-members:
+ :show-inheritance:
+ :inherited-members:
+
+pypots.imputation.pyraformer
+------------------------------
+
+.. automodule:: pypots.imputation.pyraformer
+ :members:
+ :undoc-members:
+ :show-inheritance:
+ :inherited-members:
+
pypots.imputation.fedformer
------------------------------
diff --git a/docs/references.bib b/docs/references.bib
index 2c07158a..def627d1 100644
--- a/docs/references.bib
+++ b/docs/references.bib
@@ -355,20 +355,6 @@ @article{wu2015TimeSeries
url = {https://eudl.eu/doi/10.4108/icst.iniscom.2015.258269}
}
-@article{yoon2017EstimatingMissing,
-title = {Estimating {{Missing Data}} in {{Temporal Data Streams Using Multi-directional Recurrent Neural Networks}}},
-author = {Yoon, Jinsung and Zame, William R. and {van der Schaar}, Mihaela},
-year = {2017},
-month = nov,
-journal = {arXiv:1711.08742 [cs]},
-eprint = {1711.08742},
-eprinttype = {arxiv},
-primaryclass = {cs},
-url = {http://arxiv.org/abs/1711.08742},
-archiveprefix = {arXiv},
-keywords = {Computer Science - Machine Learning}
-}
-
@article{yuan2019E2GAN,
title = {{{E}}{$^{2}$}{{GAN}}: {{End-to-End Generative Adversarial Network}} for {{Multivariate Time Series Imputation}}},
author = {Yuan, Xiaojie and Luo, Yonghong and Zhang, Ying and Cai, Xiangrui},
@@ -616,3 +602,126 @@ @article{das2023tide
year={2023},
url={https://openreview.net/forum?id=pCbC3aQB5W},
}
+
+@inproceedings{chen2023contiformer,
+title={ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling},
+author={Yuqi Chen and Kan Ren and Yansen Wang and Yuchen Fang and Weiwei Sun and Dongsheng Li},
+booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
+year={2023},
+url={https://openreview.net/forum?id=YJDz4F2AZu}
+}
+
+@inproceedings{lee2024pits,
+title={Learning to Embed Time Series Patches Independently},
+author={Seunghan Lee and Taeyoung Park and Kibok Lee},
+booktitle={The Twelfth International Conference on Learning Representations},
+year={2024},
+url={https://openreview.net/forum?id=WS7GuBDFa2}
+}
+
+@inproceedings{wang2023micn,
+title={{MICN}: Multi-scale Local and Global Context Modeling for Long-term Series Forecasting},
+author={Huiqiang Wang and Jian Peng and Feihu Huang and Jince Wang and Junhui Chen and Yifei Xiao},
+booktitle={The Eleventh International Conference on Learning Representations},
+year={2023},
+url={https://openreview.net/forum?id=zt53IDUR1U}
+}
+
+@inproceedings{wang2024timemixer,
+title={TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting},
+author={Shiyu Wang and Haixu Wu and Xiaoming Shi and Tengge Hu and Huakun Luo and Lintao Ma and James Y. Zhang and Jun Zhou},
+booktitle={The Twelfth International Conference on Learning Representations},
+year={2024},
+url={https://openreview.net/forum?id=7oLshfEIC2}
+}
+
+@article{gu2023mamba,
+title={Mamba: Linear-Time Sequence Modeling with Selective State Spaces},
+author={Gu, Albert and Dao, Tri},
+journal={arXiv preprint arXiv:2312.00752},
+year={2023}
+}
+
+@article{zhang2022lightts,
+title={Less Is More: Fast Multivariate Time Series Forecasting with Light Sampling-oriented MLP Structures},
+author={Tianping Zhang and Yizhuo Zhang and Wei Cao and Jiang Bian and Xiaohan Yi and Shun Zheng and Jian Li},
+year={2022},
+eprint={2207.01186},
+archivePrefix={arXiv},
+primaryClass={cs.LG}
+}
+
+@article{lin2023segrnn,
+title={{SegRNN}: Segment Recurrent Neural Network for Long-Term Time Series Forecasting},
+author={Shengsheng Lin and Weiwei Lin and Wentai Wu and Feiyu Zhao and Ruichao Mo and Haotong Zhang},
+year={2023},
+eprint={2308.11200},
+archivePrefix={arXiv},
+primaryClass={cs.LG}
+}
+
+@article{chen2023tsmixer,
+title={{TSMixer}: An All-MLP Architecture for Time Series Forecasting},
+author={Si-An Chen and Chun-Liang Li and Nate Yoder and Sercan O. Arik and Tomas Pfister},
+year={2023},
+eprint={2303.06053},
+archivePrefix={arXiv},
+primaryClass={cs.LG}
+}
+
+@inproceedings{choi2024timecib,
+title={Conditional Information Bottleneck Approach for Time Series Imputation},
+author={MinGyu Choi and Changhee Lee},
+booktitle={The Twelfth International Conference on Learning Representations},
+year={2024},
+url={https://openreview.net/forum?id=K1mcPiDdOJ}
+}
+
+@article{gao2024units,
+title={{UniTS}: Building a Unified Time Series Model},
+author={Gao, Shanghua and Koker, Teddy and Queen, Owen and Hartvigsen, Thomas and Tsiligkaridis, Theodoros and Zitnik, Marinka},
+journal={arXiv},
+url={https://arxiv.org/pdf/2403.00131.pdf},
+year={2024}
+}
+
+@article{liu2024timesurl,
+title={{TimesURL}: Self-Supervised Contrastive Learning for Universal Time Series Representation Learning},
+author={Liu, Jiexi and Chen, Songcan},
+volume={38},
+url={https://ojs.aaai.org/index.php/AAAI/article/view/29299},
+DOI={10.1609/aaai.v38i12.29299},
+number={12},
+journal={Proceedings of the AAAI Conference on Artificial Intelligence},
+year={2024},
+month={Mar.},
+pages={13918-13926},
+}
+
+@inproceedings{luo2024moderntcn,
+title={Modern{TCN}: A Modern Pure Convolution Structure for General Time Series Analysis},
+author={Luo, Donghao and Wang, Xue},
+booktitle={The Twelfth International Conference on Learning Representations},
+year={2024},
+url={https://openreview.net/forum?id=vpJMJerXHU}
+}
+
+@inproceedings{liu2022scinet,
+author = {Liu, Minhao and Zeng, Ailing and Chen, Muxi and Xu, Zhijian and Lai, Qiuxia and Ma, Lingna and Xu, Qiang},
+booktitle = {Advances in Neural Information Processing Systems},
+editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
+pages = {5816--5828},
+publisher = {Curran Associates, Inc.},
+title = {SCINet: Time Series Modeling and Forecasting with Sample Convolution and Interaction},
+url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/266983d0949aed78a16fa4782237dea7-Paper-Conference.pdf},
+volume = {35},
+year = {2022}
+}
+
+@inproceedings{kim2022revin,
+title={Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift},
+author={Taesung Kim and Jinhee Kim and Yunwon Tae and Cheonbok Park and Jang-Ho Choi and Jaegul Choo},
+booktitle={International Conference on Learning Representations},
+year={2022},
+url={https://openreview.net/forum?id=cGDAkQo1C0p}
+}
diff --git a/pypots/imputation/revinscinet/model.py b/pypots/imputation/revinscinet/model.py
index e5f03e6d..cd63982e 100644
--- a/pypots/imputation/revinscinet/model.py
+++ b/pypots/imputation/revinscinet/model.py
@@ -23,7 +23,7 @@
class RevIN_SCINet(BaseNNImputer):
"""The PyTorch implementation of the RevIN_SCINet model.
- RevIN_SCINet is originally proposed by et al. in :cite:`wu2021autoformer`.
+ RevIN_SCINet is originally proposed by Kim et al. in :cite:`kim2022revin`.
Parameters
----------
diff --git a/pypots/imputation/scinet/model.py b/pypots/imputation/scinet/model.py
index 884bda45..667e05f8 100644
--- a/pypots/imputation/scinet/model.py
+++ b/pypots/imputation/scinet/model.py
@@ -23,7 +23,7 @@
class SCINet(BaseNNImputer):
"""The PyTorch implementation of the SCINet model.
- SCINet is originally proposed by Wu et al. in :cite:`wu2021autoformer`.
+ SCINet is originally proposed by Liu et al. in :cite:`liu2022scinet`.
Parameters
----------
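
For orientation, here is a rough usage sketch of how an imputer class such as the SCINet model above is typically driven through the PyPOTS fit/impute interface. The constructor arguments shown are illustrative placeholders (the real, model-specific parameters are documented on the class), and the dict-based dataset layout is assumed to match the other PyPOTS imputers.

```python
# Rough usage sketch; constructor arguments are placeholders, not the real SCINet signature.
import numpy as np
from pypots.imputation import SCINet  # assumed public import path

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 48, 5))
X[rng.random(X.shape) < 0.1] = np.nan  # NaNs mark the missing values

model = SCINet(
    n_steps=48,
    n_features=5,
    # ... model-specific hyperparameters (stacks, levels, hidden sizes, ...) go here ...
    epochs=5,
)
model.fit({"X": X})               # train on the partially-observed series
imputed = model.impute({"X": X})  # same shape as X, with missing entries filled in
```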