Issue in compute_jackson_cheby_coeff #74
Thanks for reporting this. @mdeff, do you know why this definition was used? I remember we had a big discussion about it when working on a student project together, as it did not match the definition of chebynet.
Yes, the first coefficient is doubled in the PyGSP. Indeed, in the ML code I wrote, I did not double it. The code in the PyGSP is a straight translation from Matlab, which also doubles the first coefficient. Do you see any reason to double it, @nperraud? Did you write the Matlab code? I was not going to double the first coefficient in my overhaul of the Chebyshev approximation in the
This choice doesn't impact the user as long as it's consistent between the code that computes the coefficients and the one that applies them. We should check what other implementations do.
The Chebyshev implementation of the gspbox was taken from the code of the spectral wavelet paper from 2012. I do not know why this factor of two is there; it might even be a mistake. I believe it was done intentionally, but I have no clue why.
Thanks for the historical pointer. :) I don't think there's a mistake; it's a matter of definition. A factor of two must appear for the first coefficient: it is divided by 2 either when used (current PyGSP behavior) or when computed (the ML code, because there the coefficient is learned rather than computed). I have a slight preference for the second. But is there a generally accepted definition?
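To make the two options concrete, here is a minimal sketch (not PyGSP code; the test filter and quadrature rule are illustrative assumptions). The coefficients are first computed with the doubled constant term; then either c[0] is halved at evaluation time (current PyGSP/sgwt behavior) or it is halved once at computation time and the series is evaluated as a plain sum, which is what numpy's `chebval` does. Both give the same approximation.

```python
import numpy as np

def cheby_coeff(g, order, n_quad=1000):
    # Chebyshev coefficients of g on [-1, 1] via the cosine quadrature
    # c_k ~= (2/pi) * integral_0^pi g(cos t) cos(k t) dt,
    # i.e. the "doubled" c_0 convention of the sgwt/gspbox code.
    t = (np.arange(n_quad) + 0.5) * np.pi / n_quad
    gt = g(np.cos(t))
    return np.array([2.0 / n_quad * np.sum(gt * np.cos(k * t))
                     for k in range(order + 1)])

g = lambda x: np.exp(-2 * (x + 1))      # arbitrary smooth test filter (assumption)
c = cheby_coeff(g, order=30)
x = np.linspace(-1, 1, 5)

# Convention A (current PyGSP / sgwt): halve c[0] when evaluating.
approx_a = 0.5 * c[0] + sum(c[k] * np.cos(k * np.arccos(x)) for k in range(1, len(c)))

# Convention B (numpy / the ML code): halve c[0] once, then evaluate a plain sum.
c_plain = c.copy()
c_plain[0] /= 2
approx_b = np.polynomial.chebyshev.chebval(x, c_plain)

print(np.allclose(approx_a, approx_b))         # True: same approximation
print(np.allclose(approx_a, g(x), atol=1e-6))  # and it matches the filter
```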
For reference, here is the spectral graph wavelet toolbox: https://wiki.epfl.ch/sgwt. I have not dug deeply into why they chose this definition. I think we "simply" need to properly reference the definition we choose to use.
Agreed.
Agreed. As numpy went without the factor 2 in the evaluation (i.e., like the ML code, unlike the current PyGSP), I think we should follow that convention for consistency (and least surprise to users). Maybe we should even use numpy to compute the coefficients and evaluate the polynomial.
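For illustration, a sketch of what relying on numpy for both steps could look like; the filter, the spectrum bound `lmax`, and the degree are made-up placeholders, not PyGSP API. `numpy.polynomial.chebyshev.chebinterpolate` already returns coefficients in the non-doubled convention that `chebval` expects, so no factor of 2 appears anywhere.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

lmax = 2.0                                  # assumed largest graph eigenvalue
g = lambda lam: np.exp(-lam)                # placeholder filter on [0, lmax]

# Map [0, lmax] to [-1, 1] and let numpy compute the coefficients;
# chebinterpolate returns them in numpy's (non-doubled) convention.
h = lambda x: g(lmax * (x + 1) / 2)
coeffs = C.chebinterpolate(h, 30)

lam = np.linspace(0, lmax, 100)
x = 2 * lam / lmax - 1
err = np.max(np.abs(C.chebval(x, coeffs) - g(lam)))
print(err)   # tiny: degree 30 is plenty for a smooth filter
```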
Agreed, let us be compatible with numpy.
It seems the zeroth coefficient has to be corrected from

ch[0] = (2 / np.pi) * (np.arccos(filter_bounds[0]) - np.arccos(filter_bounds[1]))

to

ch[0] = (1 / np.pi) * (np.arccos(filter_bounds[0]) - np.arccos(filter_bounds[1]))

so that it follows the numpy (non-doubled) convention.
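As a sanity check, here is a sketch under assumptions (it is not the actual `compute_jackson_cheby_coeff` code, and the Jackson damping is left out to keep it minimal): with the proposed non-doubled `ch[0]`, the closed-form Chebyshev coefficients of the ideal band-pass indicator on `[a, b]` (bounds already mapped to `[-1, 1]`) can be fed directly to numpy's `chebval`.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def ideal_bandpass_cheby(a, b, order):
    # Chebyshev coefficients, in numpy's convention, of the indicator of
    # [a, b] with -1 <= a < b <= 1 (bounds already mapped to [-1, 1]).
    ta, tb = np.arccos(a), np.arccos(b)
    ch = np.empty(order + 1)
    ch[0] = (1 / np.pi) * (ta - tb)          # the proposed fix: 1/pi, not 2/pi
    k = np.arange(1, order + 1)
    ch[1:] = (2 / (k * np.pi)) * (np.sin(k * ta) - np.sin(k * tb))
    return ch

a, b = -0.5, 0.2                             # illustrative band, not from the issue
ch = ideal_bandpass_cheby(a, b, order=500)
x = np.linspace(-1, 1, 2001)
approx = chebval(x, ch)
target = ((x >= a) & (x <= b)).astype(float)

# Away from the two jumps the truncated series approaches the indicator
# (the remaining error shrinks as the order grows).
away = (np.abs(x - a) > 0.1) & (np.abs(x - b) > 0.1)
print(np.max(np.abs(approx[away] - target[away])))
```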