Reworking nonnegative algorithm and solvers #542
Conversation
- Correcting HALS for sparse and ridge regularizations
- Adding balancing
- Adding Tucker normalization in class
- Updating API, with todos
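For reference, a HALS update with both an l1 (sparsity) and an l2 (ridge) penalty can be sketched in plain NumPy as below. This is an illustrative sketch of the standard regularized HALS column update, not TensorLy's implementation; the function name and signature are hypothetical.

```python
import numpy as np

def hals_nnls(M, U, V, n_iter=50, l1=0.0, l2=0.0):
    """Sketch of regularized HALS for
        min_{U >= 0} ||M - U V^T||_F^2 + l1 * sum(U) + l2 * ||U||_F^2.

    Hypothetical helper, not TensorLy's API. U is updated column by
    column; each column subproblem is a separable nonnegative quadratic
    with closed-form solution
        u_j = max(0, (R v_j - l1/2) / ((V^T V)_jj + l2)),
    where R is the residual excluding column j. Assumes each column of V
    is nonzero (or l2 > 0), so the denominator is positive.
    """
    MV = M @ V            # (m, r), precomputed once
    VtV = V.T @ V         # (r, r)
    for _ in range(n_iter):
        for j in range(U.shape[1]):
            # R v_j expanded: M v_j - U (V^T V)_:,j + u_j (V^T V)_jj
            num = MV[:, j] - U @ VtV[:, j] + U[:, j] * VtV[j, j] - l1 / 2
            den = VtV[j, j] + l2
            U[:, j] = np.maximum(0.0, num / den)
    return U

# Toy usage: fit a nonnegative factor to random data.
rng = np.random.default_rng(0)
M = np.abs(rng.standard_normal((8, 6)))
V = np.abs(rng.standard_normal((6, 3)))
U0 = np.abs(rng.standard_normal((8, 3)))
U = hals_nnls(M, U0.copy(), V, n_iter=100, l1=0.01, l2=0.01)
```

Since each column update is an exact minimizer, the regularized objective is monotonically non-increasing across sweeps.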
Codecov Report

Additional details and impacted files:

@@ Coverage Diff @@
##             main     #542      +/-   ##
==========================================
- Coverage   87.15%   86.85%   -0.30%
==========================================
  Files         121      127       +6
  Lines        7693     7937     +244
==========================================
+ Hits         6705     6894     +189
- Misses        988     1043      +55

View the full report in Codecov by Sentry.
@cohenjer The changes sound amazing; I went through it quickly and it looks good at a high level. I know you mentioned it feels artificial to break this into multiple PRs, but splitting it into smaller, more manageable PRs would make reviewing easier. Docs, examples, etc. can be reviewed easily enough as is, I think, so maybe we can split out the novel additions first.
@cohenjer this is outstanding! Many thanks. I agree about splitting this up. For instance, perhaps much of the (1) documentation, (2) return_errors interface, (3) MTTKRP, and (4) tl.maximum changes can be separate PRs? I'm happy to quickly review each if so. It's so easy to miss bugs otherwise... I am strongly in favor of using a consistent loss throughout (and having it be the loss, not the square root)! This inconsistency has tripped me up several times.
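To illustrate the inconsistency mentioned above: some routines report a sqrt-based relative error while others report the loss itself. A minimal NumPy sketch of the two quantities (all names here are illustrative, not TensorLy's API):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((4, 5))                 # "true" tensor
T_hat = T + 0.1 * rng.random((4, 5))   # a reconstruction of it

# The loss itself: squared Frobenius norm of the residual.
loss = np.linalg.norm(T - T_hat) ** 2

# The sqrt-based quantity some solvers return instead: relative error.
rel_error = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

Mixing the two across solvers means convergence curves are not directly comparable, which is the point being made in the thread.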
Hi @JeanKossaifi @aarmey. Thank you for the kind comments, and I'm very happy to hear you think the changes are positive! I agree the PR is too large to be reviewed efficiently; I will cut it down into several parts (possibly more than four). However, I am unsure whether there is a more efficient process than copy/pasting the new content manually. My commits are all over the place :/
One method that has helped me is to make a new branch, then |
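The comment is cut off here, but a branch-splitting workflow along these lines (an assumption; the exact commands are not shown in the thread) can be demonstrated in a throwaway repo:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "init"
base=$(git symbolic-ref --short HEAD)   # default branch name (main or master)

# Simulate the messy feature branch with many small commits.
git checkout -q -b big-feature
echo "doc change" > docs.md
git add docs.md && git commit -q -m "one of many small commits"

# Split: start a clean branch from the base, pull over only some files.
git checkout -q "$base"
git checkout -q -b split-part-1
git checkout big-feature -- docs.md     # take just the files for this PR
git commit -q -m "Part 1: docs and examples"
```

`git checkout <branch> -- <path>` copies the files into both the index and the working tree, so each split branch only carries the changes meant for its PR.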
Great suggestion @aarmey - we should have a section in the contribution doc to collate all these tips! |
Working on this right now; I should be creating a few PRs soon!
Should we close this, @cohenjer?
Hi @JeanKossaifi, yes, we can close this one, since #542 (and the subsequent PRs that are queued) cover the same contribution.
First of all, I am sorry for the massive PR with so many small commits (to be squashed upon merge, hopefully). I have been working on this for quite some time, and cutting the work into pieces seemed artificial and challenging.
Anyway, I hope this makes the nonnegative parts of TensorLy a little cleaner and more efficient!
Improvements for NNLS

- `non_negative_tucker_hals` (this is now similar to the `non_negative_parafac_hals` API)
- `tensor_norm` everywhere.

Overall improvements

- `unfolding_dot_khatri_rao` function is unnecessarily slow #442
- `nnls.py`, `admm.py`, `proximal.py` and `penalizations.py` (this last one being mostly empty, but I plan to populate it in a coming PR).

Others

Pending

- `test_entropy.py`: I had to comment out the test with normalization activated; I have no explanation as to why that failed...
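For context on the #442 point above, the operation in question is an MTTKRP: multiplying a tensor unfolding by a Khatri-Rao product of the other factors. A plain-NumPy sketch for a 3-way tensor (function names here are illustrative, not TensorLy's API):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: A (J, R), B (K, R) -> (J*K, R)."""
    J, R = A.shape
    K, _ = B.shape
    # Broadcast to (J, K, R), then flatten the first two axes.
    return (A[:, None, :] * B[None, :, :]).reshape(J * K, R)

def mttkrp_mode0(T, B, C):
    """MTTKRP for mode 0 of a 3-way tensor T (I, J, K) with factors
    B (J, R) and C (K, R): returns unfold_0(T) @ khatri_rao(B, C),
    shape (I, R)."""
    T0 = T.reshape(T.shape[0], -1)   # mode-0 unfolding (C-order)
    return T0 @ khatri_rao(B, C)
```

One direction a speed-up can take is avoiding the explicit (J*K, R) Khatri-Rao matrix altogether, e.g. contracting directly with `np.einsum('ijk,jr,kr->ir', T, B, C)`.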