More stable and efficient Mahalanobis distance #40
Comments
For SpectralSignatures we might want to consider …
Thanks for this! Given that this doesn't seem to be a bottleneck for now, we probably shouldn't spend too much time on it, but I'm getting slightly nerd-sniped. If we assume the covariance matrix is full rank, then I think torch.cholesky_solve is probably much better than … I think allowing non-full-rank covariance matrices (typically from having fewer samples than dimensions of the activation) is kind of nice, but I'm not sure anyway whether using a pseudoinverse … If we want to keep the current behavior, we could keep track of how many samples were used to compute the covariance matrix, and then use Cholesky iff …
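The trade-off being discussed could be sketched roughly as follows (a sketch, not the project's actual code; the function names are made up for illustration). `pinv` materializes an explicit pseudoinverse, while the Cholesky route factors the covariance once and back-substitutes, which avoids forming an inverse at all:

```python
import torch

def mahalanobis_sq_pinv(x, mean, cov):
    # Current-style approach: explicit pseudoinverse.
    # Works even when cov is rank-deficient, but is slower and less stable.
    delta = x - mean
    return torch.einsum("...i,ij,...j->...", delta, torch.linalg.pinv(cov), delta)

def mahalanobis_sq_cholesky(x, mean, cov):
    # Cholesky approach: requires cov to be positive definite (full rank).
    delta = (x - mean).unsqueeze(-1)    # (..., d, 1)
    L = torch.linalg.cholesky(cov)      # cov = L @ L.T
    z = torch.cholesky_solve(delta, L)  # solves cov @ z = delta
    return (delta * z).sum(dim=(-2, -1))
```

The Cholesky version does one O(d³/3) factorization plus triangular solves, versus a full SVD inside `pinv`, and solving against the factors is generally better conditioned than multiplying by an explicitly formed (pseudo)inverse.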
Oh, this is of course even better than …
Oh, that question title is spot on. Shows how much research I've done... ^^'
Note that …
Possible compromise between simplicity and speed/stability: use the Cholesky method from above by default, but fall back to pinv if the user passes in …
Sounds like a good compromise.
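The agreed-on compromise might look roughly like the sketch below, which uses torch.linalg.cholesky_ex to detect a non-positive-definite covariance without raising, and only then falls back to the pseudoinverse. The helper name and the allow_singular flag are hypothetical, not part of the project:

```python
import torch

def solve_cov(cov, delta, allow_singular=True):
    # Hypothetical helper: solve cov @ z = delta.
    # Try a Cholesky factorization first; cholesky_ex reports failure via
    # `info` instead of raising, so we can fall back cheaply.
    L, info = torch.linalg.cholesky_ex(cov)
    if int(info) == 0:
        # cov is positive definite: fast, stable path.
        return torch.cholesky_solve(delta, L)
    if not allow_singular:
        raise ValueError("covariance matrix is not positive definite")
    # Rank-deficient fallback (e.g. fewer samples than dimensions).
    return torch.linalg.pinv(cov) @ delta
```

Tracking the sample count, as suggested above, would let the caller set allow_singular (or skip the Cholesky attempt entirely) based on whether n_samples >= d.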
We currently use `pinv`, and should probably use https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html instead (or maybe some third thing). Doesn't seem like a big issue so far, though.
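For reference, swapping `pinv` for torch.linalg.lstsq might look like this sketch (the function name is hypothetical). Note that lstsq's default "gels" driver assumes the matrix has full rank; the "gelsd" driver handles rank-deficient matrices but runs on CPU only:

```python
import torch

def mahalanobis_sq_lstsq(x, mean, cov):
    # Solve cov @ z = (x - mean) with a least-squares solver instead of
    # explicitly forming pinv(cov); for singular cov this gives the same
    # minimum-norm solution the pseudoinverse would.
    delta = (x - mean).unsqueeze(-1)  # (d, 1)
    z = torch.linalg.lstsq(cov, delta, driver="gelsd").solution
    return (delta * z).sum(dim=(-2, -1))
```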