
Transformer_Time_Series

DISCLAIMER: THIS IS NOT THE PAPER'S CODE. THIS DOES NOT HAVE SPARSITY. THIS IS TEACHER-FORCED LEARNING. This repository only tries to replicate the simple example without sparsity from "Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting" (NeurIPS 2019).
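For readers unfamiliar with the term, here is a minimal, hypothetical sketch of what teacher-forced learning means in this setting: at every decoding step the model is fed the ground-truth previous value rather than its own prediction. This uses the stock `nn.Transformer` purely for illustration; it is not this repository's model or training loop.

```python
import torch
import torch.nn as nn

# Illustrative teacher-forcing setup (not the repo's actual architecture):
# the decoder input is the true series shifted by one step, and the model
# predicts the next value at every position under a causal mask.
model = nn.Transformer(d_model=32, nhead=4, batch_first=True)  # placeholder model
proj_in, proj_out = nn.Linear(1, 32), nn.Linear(32, 1)

series = torch.randn(8, 48, 1)            # (batch, time, features) toy data
src = proj_in(series[:, :24, :])          # conditioning window
tgt_in = proj_in(series[:, 23:47, :])     # ground-truth inputs (teacher forcing)
tgt_true = series[:, 24:48, :]            # values the model must predict

mask = model.generate_square_subsequent_mask(24)  # causal mask over the decoder
pred = proj_out(model(src, tgt_in, tgt_mask=mask))
loss = nn.functional.mse_loss(pred, tgt_true)
loss.backward()
```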

Able to match the results of the paper for the synthetic dataset, as shown by the R_p values in the table below.
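For reference, a common form of the R_p (rho-quantile) risk used in this line of forecasting work is sketched below; the exact normalization used in the paper or in this repository may differ, so treat it as an illustrative definition only.

```python
import numpy as np

def rho_risk(y_true, y_pred, rho=0.5):
    """Quantile (rho) risk, a common form of the R_p forecasting metric.

    With rho=0.5 this reduces to sum|y - y_hat| / sum|y| (normalized deviation).
    Illustrative sketch only; the paper's exact normalization may differ.
    """
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    diff = y_true - y_pred
    # Pinball loss: penalize under- and over-prediction asymmetrically by rho.
    loss = np.where(diff > 0, rho * diff, (rho - 1.0) * diff)
    return 2.0 * loss.sum() / np.abs(y_true).sum()
```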

The synthetic dataset was constructed as shown in the figure below.
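In case the figure does not render, the sketch below generates a comparable synthetic signal: sums of sinusoids with random per-series amplitudes plus Gaussian noise. The constants and functional form are illustrative assumptions, not the exact construction from the paper or this repository.

```python
import numpy as np

def make_synthetic_series(n_series=100, t0=96, horizon=24, seed=0):
    """Hypothetical generator for a sinusoidal synthetic dataset.

    Each series is a sum of sinusoids with random per-series amplitudes
    plus Gaussian noise; the constants are illustrative, not the paper's.
    """
    rng = np.random.default_rng(seed)
    length = t0 + horizon
    t = np.arange(length)
    amp = rng.uniform(1.0, 6.0, size=(n_series, 2))          # random amplitudes
    series = (amp[:, :1] * np.sin(2 * np.pi * t / 24)          # daily-like cycle
              + amp[:, 1:] * np.sin(2 * np.pi * t / 12)        # faster cycle
              + rng.normal(0.0, 0.1, size=(n_series, length))) # observation noise
    return series  # shape (n_series, t0 + horizon)
```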

A visualization of how the attention layers attend to the signal when predicting the last timestep t = t0 + 24 - 1 is shown below.
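A minimal sketch of how such an attention map could be plotted with matplotlib, assuming you have extracted per-head attention weights for the final predicted step; the array below is a random placeholder, not output from this repository's model.

```python
import numpy as np
import matplotlib.pyplot as plt

# attn: hypothetical (n_heads, input_len) attention weights for the last
# predicted timestep t = t0 + 24 - 1; replace with weights from your model.
attn = np.random.rand(8, 96)
attn /= attn.sum(axis=1, keepdims=True)  # normalize each head's weights

fig, ax = plt.subplots(figsize=(8, 3))
im = ax.imshow(attn, aspect="auto", cmap="viridis")
ax.set_xlabel("input timestep")
ax.set_ylabel("attention head")
ax.set_title("Attention over the input when predicting t = t0 + 24 - 1")
fig.colorbar(im, ax=ax, label="attention weight")
plt.show()
```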

Learning curves (MSE) and a validation example are shown below.
