split and where translations #6
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master       #6      +/-   ##
==========================================
+ Coverage   97.30%   97.45%   +0.14%
==========================================
  Files           1        1
  Lines         482      510      +28
==========================================
+ Hits          469      497      +28
  Misses         13       13
Continue to review full report at Codecov.
LGTM.

Yeah, tensorflow seems to be implementing a much more numpy-compatible interface, which should make life easier, and torch similarly seems to be planning to add more numpy-API-like functions to torch.linalg eventually. No worries about cupy, as it very explicitly mimics the numpy interface.
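As a quick illustration of why that convergence matters for autoray, here is a minimal sketch of the dispatch from the user side (the torch line of course needs torch installed):

```python
import autoray as ar

# the same numpy-style call is dispatched to whichever backend is requested ...
x = ar.do("ones", (3, 3), like="numpy")
y = ar.do("ones", (3, 3), like="torch")  # requires torch

# ... or the backend is inferred from the arguments themselves
s = ar.do("sum", y)
```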
tests/test_autoray.py
Outdated
if backend == "dask": | ||
pytest.xfail("dask doesn't support split yet") | ||
A = ar.do("ones", (10, 20, 10), like=backend) | ||
sections = [2, 4, 14] |
Is it possible to parametrize sections with an int case as well? Then we'll have full test coverage for the translations.
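A rough sketch of what that parametrization could look like (not the test as merged; the backend list and the expected chunk counts here are just for illustration):

```python
import pytest
import autoray as ar


@pytest.mark.parametrize("backend", ["numpy", "torch", "dask"])  # assumed backend list
@pytest.mark.parametrize("sections", [5, [2, 4, 14]])            # int case and index-list case
def test_split(backend, sections):
    if backend == "dask":
        pytest.xfail("dask doesn't support split yet")
    A = ar.do("ones", (10, 20, 10), like=backend)
    splits = ar.do("split", A, sections, axis=1)
    # 5 equal sections of width 4, or cuts at 2, 4 and 14 giving 4 pieces
    assert len(splits) == (5 if sections == 5 else 4)
```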
Good thing you noticed: the implementation of the int case for torch was wrong (torch takes the split size as opposed to the number of splits). It should be fixed now.
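To make that concrete, here is a sketch of the kind of translation torch needs (the wrapper name is hypothetical, not the one actually added in this PR): numpy's split takes a number of equal sections or a list of cut indices, while torch.split takes a chunk size or a list of chunk sizes.

```python
import torch


def numpy_style_torch_split(x, indices_or_sections, axis=0):
    # hypothetical wrapper, for illustration only
    dim_size = x.shape[axis]
    if isinstance(indices_or_sections, int):
        # numpy: *number* of equal sections -> torch: *size* of each section
        return torch.split(x, dim_size // indices_or_sections, dim=axis)
    # numpy: cut indices -> torch: sizes of consecutive chunks
    bounds = [0, *indices_or_sections, dim_size]
    sizes = [b - a for a, b in zip(bounds, bounds[1:])]
    return torch.split(x, sizes, dim=axis)


t = torch.ones(10, 20, 10)
print([c.shape[1] for c in numpy_style_torch_split(t, 5, axis=1)])           # [4, 4, 4, 4, 4]
print([c.shape[1] for c in numpy_style_torch_split(t, [2, 4, 14], axis=1)])  # [2, 2, 10, 6]
```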
Nice, thanks for the extra test coverage and PR!
Note I've updated.
As mentioned in #5, np.diff didn't seem worthwhile. There is tf.experimental.numpy.diff, but it's only in the newest version of tensorflow, and it will probably become just tf.diff in a later version.

split can probably be unified into one function, but this would mean wrapping a translation around the wrapper I made for torch, since the syntax is slightly different. Both typically take a list as input if using sections to split the array, so I don't think my wrapper adds much extra overhead. I didn't use tensor_split since it's not yet in the stable release. Maybe one could do different things in autoray depending on the torch version number, but that doesn't seem very elegant.
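For illustration only, that version-gated approach might look something like the sketch below (the function name is hypothetical, and the 1.8 cutoff for torch.tensor_split is an assumption):

```python
import torch
from packaging.version import parse as parse_version

if parse_version(torch.__version__) >= parse_version("1.8"):

    def torch_split(x, indices_or_sections, axis=0):
        # tensor_split already follows the numpy calling convention
        return torch.tensor_split(x, indices_or_sections, dim=axis)

else:

    def torch_split(x, indices_or_sections, axis=0):
        # older torch: convert to torch.split's chunk-size convention
        if isinstance(indices_or_sections, int):
            split_size = x.shape[axis] // indices_or_sections
        else:
            bounds = [0, *indices_or_sections, x.shape[axis]]
            split_size = [b - a for a, b in zip(bounds, bounds[1:])]
        return torch.split(x, split_size, dim=axis)
```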