Initial implementation of np.linalg.lstsq() via SVD #2744
Conversation
CI failure looks like a conda failure. Can someone restart it?
nvm, once I logged in to Travis I was able to restart.
Uh oh, I forgot about #2200 :/ This is completely my bad. I'm really sorry for the repeated work, and for accidentally ignoring the work you put in, @joaogui1.

As JAX activity has picked up (especially inside Alphabet), we've gotten a lot worse at following up on PRs from amazing OSS contributors. (OSS contributions are especially amazing because contributors don't have the benefit of our extremely active internal chat channels.) We're trying to address the general problem. As of this week we're experimenting with a GitHub rotation. It's tough, though, because the JAX team is pretty small. I'm optimistic that we'll get better over time.

As for this specific case: @jakevdp @joaogui1, is there a way to combine efforts here and maybe draw on both PRs? Or is there too much redundancy, and we need to chalk this up to a mistake to learn from?

@joaogui1, let me know if there is some course of action I can take to make this more right. (Also, maybe this is a good chance to highlight any other PRs of yours that we've let languish...)
So, after reading his code I believe @jakevdp's implementation is better than mine, so I will close my PR.
Also, if you guys want some help, I can reopen #1874 and search for solved/obsolete issues and PRs so someone can close them and help organize things a little more.
I'm not able to get check_grads to pass consistently. It seems to be flaking on complex and/or low-rank inputs – 7 failures with
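For context, here's a minimal sketch of the kind of check_grads call involved; the shapes and the lambda wrapper are illustrative assumptions, not the actual test code from this PR.

```python
import jax.numpy as jnp
from jax.test_util import check_grads

# Small, well-conditioned real inputs; the flaky cases reported above
# involved complex and/or low-rank matrices, which are harder on an
# SVD-based solve.
a = jnp.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = jnp.array([1.0, 2.0, 3.0])

# Compares autodiff derivatives of the solution against numerical ones.
check_grads(lambda a, b: jnp.linalg.lstsq(a, b)[0], (a, b), order=1)
```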
Is it worth merging this even without gradients? (It seems that the problem is most likely not with the PR itself, given it doesn't implement any new gradients.)
Perhaps – I'm planning on digging in to implement gradients via a custom jvp, but I won't have time to look closely at that until next week.
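For readers unfamiliar with the mechanism, here's a rough sketch of what a custom jvp for a least-squares-style solve could look like. The pinv_solve helper and the simplified tangent rule are illustrative assumptions, not the rule this PR would use; a real rule needs to handle the residual term, rank deficiency, and complex inputs.

```python
import jax
import jax.numpy as jnp

@jax.custom_jvp
def pinv_solve(a, b):
    # Least-squares solution via the pseudo-inverse (illustration only).
    return jnp.linalg.pinv(a) @ b

@pinv_solve.defjvp
def pinv_solve_jvp(primals, tangents):
    a, b = primals
    da, db = tangents
    x = pinv_solve(a, b)
    # Simplified first-order rule: dx = pinv(a) @ (db - da @ x).
    # This drops the term involving the residual b - a @ x, so it is
    # only exact when the system is consistent and a has full rank.
    dx = jnp.linalg.pinv(a) @ (db - da @ x)
    return x, dx
```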
In offline discussion with @mattjj, we decided it would be worth submitting this even without robust gradient support. I removed the gradient test for now, and marked it TODO.
Sounds good!
I just rebased on master to pick up changes there. PTAL - I think this is ready for a final review.
This is an initial implementation of np.linalg.lstsq based on the SVD. A full solution would involve adding wrappers for *gelsd to lax_linalg.py & cusolver. I estimate this is about 2x slower than the full solution, based on the performance of the relevant LAPACK code paths in numpy.

Addresses part of #1999
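For illustration, here is a rough sketch of the SVD-based approach described above, assuming a real-valued matrix and a 1-D right-hand side. The helper name and the cutoff handling are assumptions, not the exact code in this PR.

```python
import jax.numpy as jnp

def lstsq_via_svd(a, b, rcond=None):
    # SVD of a: a = u @ diag(s) @ vt, with s sorted in descending order.
    u, s, vt = jnp.linalg.svd(a, full_matrices=False)
    if rcond is None:
        # Default cutoff scales with machine epsilon and the matrix size.
        rcond = jnp.finfo(s.dtype).eps * max(a.shape)
    cutoff = rcond * s[0]
    # Send singular values below the cutoff to infinity before dividing
    # (1/inf == 0), so the corresponding components of the solution are
    # exactly zero; this is how rank deficiency is handled here.
    safe_s = jnp.where(s > cutoff, s, jnp.inf)
    # x = V @ diag(1/s) @ U^T @ b
    return vt.T @ ((u.T @ b) / safe_s)
```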