
Few more questions on SAITS working method #39

Closed
Rajesh90123 opened this issue Jun 3, 2024 · 3 comments

Rajesh90123 commented Jun 3, 2024

I am really impressed with your work, "SAITS: SELF-ATTENTION-BASED IMPUTATION FOR TIME SERIES".
I was studying your code and paper, and I have a few questions:

  1. Is there any particular reason for not using a learning rate scheduler?
  2. When I tried to replicate the code, my program ran indefinitely during the random search for hyperparameter tuning, especially with the use of loguniform for the learning rate. I couldn't find any lines setting a maximum number of trials in your code or in the paper. Could you please provide information on this?
  3. Do you change the artificial missing rate for MIT during training based on the missing rate at test time? For example, if the model has to be tested on a dataset with an 80% missing rate, do you keep the MIT missing rate fixed at 20% as in the provided code, or do you retrain the entire model with the MIT missing rate manually set to 80%?
  4. Do you perform hyperparameter tuning separately for each missing rate in the validation data, or do you tune on one particular missing rate, save the best model, and use it for test sets with any missing rate? For example, tuning on a validation set with a 20% missing rate, saving the best model, and using that same model to impute a test set with an 80% missing rate?

Thank you, and if my questions are not clear, please comment and I will try my best to clarify them. English is my second language, and I am not fluent in it.
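To illustrate question 2, here is a minimal sketch of the search loop I mean, with an explicit trial cap added. The bounds, seed, and toy objective here are made up for illustration; they are not from the SAITS code:

```python
import math
import random

def sample_loguniform(low, high, rng):
    """Draw a value log-uniformly from [low, high]."""
    return 10 ** rng.uniform(math.log10(low), math.log10(high))

def random_search(objective, max_trials=50, lr_low=1e-5, lr_high=1e-2, seed=0):
    """Random search over the learning rate that always stops after max_trials."""
    rng = random.Random(seed)
    best_lr, best_score = None, float("inf")
    for _ in range(max_trials):
        lr = sample_loguniform(lr_low, lr_high, rng)
        score = objective(lr)  # lower is better
        if score < best_score:
            best_lr, best_score = lr, score
    return best_lr, best_score

# toy objective: pretend the ideal learning rate is 1e-3
best_lr, best_score = random_search(lambda lr: abs(math.log10(lr) + 3))
```

Without such a cap (or a time budget), the loop has no stopping condition, which matches the indefinite running I observed.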
@Rajesh90123 Rajesh90123 changed the title Why learning rate scheduler was not used and what is the maximum number of trials for random search? Few more questions on SAITS working method Jun 3, 2024
WenjieDu (Owner) commented Jun 14, 2024

Hi Rajesh, thanks for your attention to our work!

  1. I'd like to ask: do you have any particular reason to use a scheduler? SAITS is not an LLM and is not hard to train. But if you'd prefer to give it a try, you can use PyPOTS; the optimizers in it support LR scheduling;
  2. No, I didn't set a maximum number of trials. I just let the experiments run until I thought it was good to stop;
  3&4. The answers are both no. I've written everything I did and found in the paper. If you didn't find the answer there, you should run the experiments yourself; try it if you think it's worth a shot. BTW, here is our new paper about the masking strategies, https://arxiv.org/abs/2405.17508, and it may be helpful to you.
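For anyone curious about point 1: the effect of a step-decay scheduler (the rule implemented by PyTorch's `torch.optim.lr_scheduler.StepLR`, which the schedulers in PyPOTS mirror) boils down to a one-line formula. This is a plain-Python sketch of that rule, not the actual PyPOTS API:

```python
def step_lr(base_lr, epoch, step_size=10, gamma=0.5):
    """Step decay: multiply the learning rate by gamma every step_size epochs,
    i.e. lr = base_lr * gamma ** (epoch // step_size)."""
    return base_lr * gamma ** (epoch // step_size)

# with these settings the learning rate halves every 10 epochs
lrs = [step_lr(1e-3, e) for e in (0, 9, 10, 20)]
# → [0.001, 0.001, 0.0005, 0.00025]
```

For a model that converges easily, such a schedule mostly just slows the later epochs down, which is why it brings little benefit here.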

Please kindly cite our papers in your work if you think they're useful ;-)

@Rajesh90123 (Author)

Thank you so much, sir, for your response 🙏.

@WenjieDu (Owner)

No problem. If you work with time series, follow PyPOTS https://pypots.com closely. 🤗
