added large tensor support for add_n and tests for more ops #16476
Conversation
LGTM with minor nitpicks.
@sxjscience @apeforest @anirudh2290 @zheng-da @pengzhao-intel This PR is ready for review
assert z[-1] == 2
z = nd.minimum(x, 5)
assert z[0] == 3
assert z[-1] == 3
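For context, a minimal self-contained sketch of the kind of forward-only check this diff adds for nd.minimum; the LARGE_X constant and helper name here are assumptions, not the PR's actual test fixture:

    import mxnet as mx
    from mxnet import nd

    # Assumed element count above the 2**32 boundary so the int64 indexing
    # path is exercised; the PR's nightly test constants may differ.
    LARGE_X = 4_300_000_000

    def check_minimum_forward():
        # Forward pass only: elementwise minimum of a large vector and a scalar.
        x = nd.ones(LARGE_X) * 3
        z = nd.minimum(x, 5)
        # Spot-check both ends of the vector.
        assert z[0] == 3
        assert z[-1] == 3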
Are the backward passes of any of these functions tested?
Only forward passes are required by DGL, so only forward passes are tested.
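For reference, a rough sketch of what a backward-pass check could look like if one were added later, using mx.autograd; the sizes and gradient expectation are illustrative assumptions and not part of this PR:

    from mxnet import nd, autograd

    def check_minimum_backward(size):
        # Illustrative only: this PR tests forward passes exclusively.
        x = nd.ones(size) * 3
        x.attach_grad()
        with autograd.record():
            z = nd.minimum(x, 5)
        z.backward()  # implicit head gradient of ones
        # d/dx min(x, 5) is 1 wherever x < 5, so every gradient entry is 1 here.
        assert x.grad[0] == 1
        assert x.grad[-1] == 1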
Are there any large tensor docs on the MXNet website? They should explicitly call out that the backward pass has not been verified with large tensors.
I still have to write that. Currently we don't have any docs explaining how to use large tensor support, its caveats and limitations, or the associated perf regressions.
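Until such a page exists, a quick way to check at runtime whether a given MXNet binary was built with large tensor (int64 index) support is shown below; the feature name is from memory and should be verified against mxnet.runtime.feature_list():

    from mxnet.runtime import Features

    # Prints True only if the binary was compiled with int64 tensor size
    # support (feature name assumed; verify with mxnet.runtime.feature_list()).
    print(Features().is_enabled('INT64_TENSOR_SIZE'))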
Force-pushed from 014a137 to 1917af8
@anirudh2290 good to merge?
Force-pushed from 1917af8 to da721d8
@mxnet-label-bot add [pr-awaiting-merge]
Force-pushed from da721d8 to c6ac555
Description
Added large tensor support for add_n and tests for the following ops (a sketch of the add_n check appears after the list):
load()
save()
add_n()
modulo()
maximum()
minimum()
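A rough sketch of the kind of forward check the add_n addition enables; the constants and shape below are assumptions and may differ from the actual nightly test:

    from mxnet import nd

    # Assumed shape with more than 2**32 total elements.
    LARGE_X = 100_000_000
    SMALL_Y = 50

    def check_add_n_forward():
        # Forward pass only: elementwise sum of several large inputs.
        x = nd.ones((LARGE_X, SMALL_Y))
        z = nd.add_n(x, x, x)
        assert z[0][0] == 3
        assert z[-1][-1] == 3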
Checklist
Essentials
Testing
Will run the full suite of tests and paste the results here.