There are three distinct changes in the lite branch that could be brought into the main branch, depending on further testing:

1. a custom gradient function for the FFT-based convolution (in `ConvolutionRenderer`) that avoids the autograd multi-step gradient in favor of a convolution with the transpose of the diff kernel;
2. a reduction of the step size of the first fit iteration to 1/10 of its nominal value, to prevent strong jumps for a well-initialized model at the very beginning, when amsgrad has no previous gradient information;
3. an effective L1 update that is expressed at the observation level, not the parameter level. This solves the problem of finding an appropriate threshold, but it is formally problematic because it's not a real prox.
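To illustrate the first item: for a circular FFT convolution, the gradient of a least-squares loss with respect to the image is the residual convolved with the transposed (flipped) kernel, which in Fourier space is just a multiplication by the conjugate kernel transform. The sketch below is a minimal stand-in, not the actual `ConvolutionRenderer` code; the function names and the quadratic loss are assumptions.

```python
import numpy as np

def fft_convolve(image, kernel):
    # circular convolution via FFT; kernel is zero-padded to the image shape
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel, s=image.shape)))

def conv_loss_grad(image, kernel, target):
    """Analytic gradient of 0.5 * ||conv(image, kernel) - target||^2 wrt image.

    Multiplying by conj(K) in Fourier space is the adjoint of the convolution,
    i.e. a convolution with the transposed (flipped) kernel -- this replaces
    autograd's multi-step backward pass with a single convolution.
    """
    residual = fft_convolve(image, kernel) - target
    K = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(residual) * np.conj(K)))
```

The single adjoint convolution does the work of the entire chain of FFT operations autograd would differentiate step by step.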
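The second item amounts to a one-iteration warm-up of the step size. A minimal sketch of such a schedule follows; the function name and the default warm-up factor are hypothetical, chosen to match the 1/10 reduction described above.

```python
def first_step_lr(iteration, base_lr, warmup_factor=0.1):
    """Step size schedule: shrink only the very first iteration.

    Sketch (names hypothetical): iteration 0 uses a fraction of the nominal
    step size because amsgrad has no accumulated gradient statistics yet,
    so a full-size first step can throw off a well-initialized model.
    """
    return base_lr * warmup_factor if iteration == 0 else base_lr
```

After the first step, amsgrad's second-moment accumulator is populated and the nominal step size applies unchanged.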
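For the third item, an observation-level L1 update might look like the following: a standard elementwise soft threshold, but with the threshold set relative to the noise in the observed data rather than tuned per parameter. This is a sketch under assumptions (function names and the noise-scaled threshold are illustrative); as noted above, applying it at the observation level is not a true proximal step for the parameters.

```python
import numpy as np

def soft_threshold(x, thresh):
    # elementwise soft threshold: the proximal operator of the L1 norm
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def observation_l1_update(model_pixels, noise_sigma, scale=1.0):
    """Threshold the model in observation space (hypothetical sketch).

    The threshold is tied to the per-pixel noise level of the data, which
    sidesteps choosing a parameter-level threshold -- but since the model
    pixels are a nonlinear function of the parameters, this is not a
    formal prox on the parameters.
    """
    return soft_threshold(model_pixels, scale * noise_sigma)
```

The appeal is that `noise_sigma` has a direct physical interpretation in the observation, whereas a parameter-level threshold does not.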
Thanks for opening this ticket, sorry that I didn't get around to it yet. You might also want to consider porting the implementation of parameters in proxmin as opposed to functions that fit them.