
Solved a problem similar to Exception: Reached maximum number of idle transformation calls #130

Open · wants to merge 3 commits into master

Conversation

@LemonCANDY42 commented Mar 18, 2023

  1. Following @jaheba's solution in awslabs/gluonts#2694 (Exception: Reached maximum number of idle transformation calls), an optional max_idle_transforms parameter is added to TimeGradEstimator. I guess it can solve problems like #127 and #117 (both "Exception: Reached maximum number of idle transformation calls"). A usage sketch follows this list.
  2. Fixed the "module 'numpy' has no attribute 'long'" problem.
  3. I found that after gluonts v0.10.x, in commit awslabs/gluonts@4126386, the freq parameter was removed, so I modified this part to avoid the error TypeError: PyTorchPredictor.__init__() got an unexpected keyword argument 'freq'. This fixes #118.
  4. Given the fixes above, the gluonts version used is 0.12.4.
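
A minimal usage sketch of point 1. All values below are placeholders rather than the exact configuration from the repository's examples; the only point of interest is the new optional keyword argument:

```python
from pts import Trainer
from pts.model.time_grad import TimeGradEstimator

# Hedged sketch: every argument except max_idle_transforms is a placeholder.
estimator = TimeGradEstimator(
    input_size=1484,
    freq="H",
    prediction_length=24,
    context_length=24,
    target_dim=370,
    trainer=Trainer(epochs=20, batch_size=64),
    # New optional kwarg added by this PR: raises gluonts' limit on idle
    # transformation calls during training; omit it to keep the default.
    max_idle_transforms=500,
)
```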

@LemonCANDY42 (Author)

Regarding the third point, can someone tell me why the freq parameter was removed in gluonts? Any clarification would be greatly appreciated.

@stathius

@LemonCANDY42 Thanks for that. I wish I had seen this PR before. I did two of the three fixes myself (plus another one about the dataset) and was going to open a PR. Could the authors please merge this? @kashif

@kashif (Collaborator) commented Mar 30, 2023

@stathius ok let me check... do we need to change the notebook?

@@ -135,7 +135,7 @@ def create_transformation(self) -> Transformation:
                AsNumpyArray(
                    field=FieldName.FEAT_STATIC_CAT,
                    expected_ndim=1,
-                   dtype=np.long,
+                   dtype=np.int_,
Suggested change:

-                   dtype=np.int_,
+                   dtype=int,
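
A quick note on the two options: np.long was a deprecated alias of the builtin int and has been removed in recent NumPy releases (hence the AttributeError), while np.int_ and plain int both resolve to NumPy's default integer dtype, so either replacement should work here. A minimal check:

```python
import numpy as np

# np.int_ and the builtin int map to the same default integer dtype,
# so AsNumpyArray behaves identically with either choice.
a = np.asarray([1, 2, 3], dtype=np.int_)
b = np.asarray([1, 2, 3], dtype=int)
print(a.dtype, b.dtype, a.dtype == b.dtype)
```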

> @stathius ok let me check... do we need to change the notebook?

The notebook seems to run fine.

@kashif (Collaborator) commented Mar 30, 2023

I am also fixing things up in the 0.7.0 branch if you want to have a look there.

    ) -> None:
        super().__init__(lead_time=lead_time)
        self.trainer = trainer
        self.dtype = dtype
        self.max_idle_transforms = kwargs["max_idle_transforms"] if "max_idle_transforms" in kwargs else None
@stathius commented Mar 30, 2023

This is by no means wrong, but it seems to me that newer versions of gluonts handle this via an environment variable. If so, it might be better to stick with that for better compatibility. @kashif
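
For reference, a sketch of the environment-variable route. The variable name GLUONTS_MAX_IDLE_TRANSFORMS is an assumption based on the discussion in awslabs/gluonts#2694, and it has to be set before gluonts is imported:

```python
import os

# Assumption: newer gluonts versions read this variable to set the idle-call
# limit; set it before importing gluonts / pts so it takes effect.
os.environ["GLUONTS_MAX_IDLE_TRANSFORMS"] = "500"

from pts.model.time_grad import TimeGradEstimator  # noqa: E402
```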

@kashif (Collaborator)

Ok, yes. If you peek into the 0.7.0 branch, you can also see I have merged the implementations of DeepAR and DeepVAR, as they differ on the output side, and the vanilla transformer also works for both the univariate and multivariate cases...

@stathius

Thanks a lot for pointing me to the 0.7.0 branch; it is really good to know you're actively working on this. I will have a more thorough look. I realize you're now using the pytorch-lightning trainer (I was entertaining doing that myself).

@c247274901

When I try DeepVAR, I have the same problem: Reached maximum number of idle transformation calls.

@jbgao commented Jul 18, 2023

I think the best way to resolve the "Reached maximum number of idle transformation calls" issue is to provide a larger value for num_instances of ExpectedNumInstanceSampler. At the moment, in this example's DeepAREstimator definition, it is fixed to num_instances = 1.0. I think it would be better to let users provide an appropriate value for this parameter in, e.g., DeepAREstimator. When this parameter is as small as 1.0 and the time series is very long (say 13K points), the probability of sampling any given window is roughly 1/13K, so the sampler frequently returns nothing. A sketch of the suggestion follows below.
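
A hedged sketch of this suggestion. The ExpectedNumInstanceSampler import is standard gluonts, but whether the pts estimator accepts a train_sampler argument directly is an assumption; the sampler may instead need to be wired into the estimator's instance splitter:

```python
from gluonts.transform import ExpectedNumInstanceSampler

# ExpectedNumInstanceSampler draws, on average, `num_instances` training windows
# per time series, so a larger value makes empty sampling passes (the idle
# transformation calls) much rarer for very long series.
train_sampler = ExpectedNumInstanceSampler(
    num_instances=100.0,  # instead of the hard-coded 1.0
    min_future=24,        # typically set to the prediction length
)

# Hypothetical wiring (gluonts-style estimators usually expose this argument):
# estimator = DeepAREstimator(..., train_sampler=train_sampler)
```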
