Add `out` and `where` args for `ht.div` #945
Conversation
GPU cluster tests are currently disabled on this Pull Request.
Codecov Report

@@           Coverage Diff           @@
##             main     #945   +/-   ##
=======================================
  Coverage   95.39%   95.39%
=======================================
  Files          65       65
  Lines        9965     9976    +11
=======================================
+ Hits         9506     9517    +11
  Misses        459      459
Found a cleaner way to handle indexing of the …
@neosunhan thank you so much for taking this on!

I have a few comments, the main thing is that `where` should probably be addressed within `_operations.__binary_op()`. This way, it would be available to all binary operations. And it would be easier to satisfy the condition that `ht.divide(t1, t2)` returns `out`, not `t1`, where `where` is False (which also holds for all numpy binary operations).

Please let me know if you need help with `__binary_ops`!
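A minimal sketch of the idea being suggested, written against plain torch tensors rather than DNDarrays; the function name `binary_op_sketch` and its signature are assumptions for illustration, not Heat's actual `__binary_op`:

```python
import torch

def binary_op_sketch(operation, t1, t2, out=None, where=None):
    # Hypothetical sketch: handle the ``where`` mask once, for every binary operation.
    # For simplicity the full result is computed first, then masked.
    result = operation(t1, t2)
    if where is None:
        if out is None:
            return result
        out.copy_(result)
        return out
    if out is None:
        # numpy semantics: positions where the mask is False stay uninitialized
        out = torch.empty_like(result)
    out[where] = result[where]
    return out
```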
heat/core/arithmetics.py
Outdated
@@ -438,6 +444,10 @@ def div(t1: Union[DNDarray, float], t2: Union[DNDarray, float]) -> DNDarray:
    The first operand whose values are divided
t2: DNDarray or scalar
    The second operand by whose values is divided
out: DNDarray, optional
    The output array. It must have a shape that the inputs broadcast to
shape and split dimension
heat/core/arithmetics.py
Outdated
out: DNDarray, optional
    The output array. It must have a shape that the inputs broadcast to
where: DNDarray, optional
    Condition of interest, where true yield divided value else yield original value in t1
We should follow `numpy.divide`, so `ht.divide` should actually yield `out` where `where` is False (and uninitialized values when `out=None`).
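For reference, this is the numpy behaviour being described (a small self-contained example with made-up values):

```python
import numpy as np

a = np.array([2.0, 4.0, 6.0])
b = np.array([1.0, 2.0, 3.0])
out = np.full(3, -1.0)

# Only positions where the mask is True receive the quotient;
# the other positions keep out's existing values.
np.divide(a, b, out=out, where=np.array([True, False, True]))
print(out)  # [ 2. -1.  2.]
```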
heat/core/arithmetics.py
Outdated
if where is not None:
    t2 = indexing.where(where, t2, 1)

return _operations.__binary_op(torch.true_divide, t1, t2, out)
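A small illustration of what this workaround does, using Heat's public `ht.where` (which wraps `indexing.where`); the values are made up:

```python
import heat as ht

t1 = ht.array([2.0, 4.0, 6.0])
t2 = ht.array([2.0, 2.0, 2.0])
mask = ht.array([True, False, True])

# Divisors are replaced with 1 where the mask is False, so those positions
# evaluate to t1 / 1, i.e. they keep t1's values rather than out's.
t2_masked = ht.where(mask, t2, 1)
print(t1 / t2_masked)  # values: [1., 4., 3.]
```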
We should return `out` instead of `t1` where `where` is False. And in fact this applies to all binary operations, as I admittedly had not realized when I created this issue.

So the way to go here would be to modify `_operations.__binary_op` to accommodate the `where` kwarg once and for all. Do you need help with that?
@ClaudiaComito Thanks for the feedback! I have taken a look at … The main idea is to create a new DNDarray with uninitialized values (using …).
Great job @neosunhan ! 👏 I have left some comments in the code.
heat/core/_operations.py
Outdated
out.larray[:] = operation(
    t1.larray.type(promoted_type), t2.larray.type(promoted_type), **fn_kwargs
else:
    out_tensor = torch.empty(output_shape, dtype=promoted_type)
2 comments here:
- `output_shape` is the global (memory-distributed) shape of the output DNDarray, here you're initializing a potentially huge torch tensor. In this case you should call `factories.empty(output_shape, dtype=..., split=..., device=...)` and that will take care of only initializing slices of the global array on each process (I think this is also why the tests fail btw).
- if I understand the numpy docs correctly, this empty `out` DNDarray only needs to be initialized if `where` is not None.
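A short sketch of the distributed-aware allocation being suggested, using Heat's public `ht.empty`; the shape, split and process count are made-up values for illustration:

```python
import heat as ht

# Allocates only the local slice of the global (100, 10000) array on each MPI process,
# instead of materializing the full tensor on every process as torch.empty would.
out = ht.empty((100, 10000), dtype=ht.float32, split=1)
print(out.gshape)  # global shape: (100, 10000)
print(out.lshape)  # local shape, e.g. (100, 5000) when running on 2 processes
```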
@ClaudiaComito I have modified the code accordingly and added a few more tests.
run tests
@neosunhan thanks a lot. This looks good, as far as I can tell. Can you update the CHANGELOG? When you're done, we can run the GPU tests.
Updated CHANGELOG for the new … Also I believe there was a typo in the CHANGELOG where the "Feature Additions" heading was repeated 3 times, which I have fixed.
run tests
@neosunhan please bear with me. While we're trying to figure out why the GPU tests fail, it occurred to me that we are not checking for `where`'s distribution scheme and whether it fits the way `out` is distributed.
heat/core/_operations.py
Outdated
@@ -43,6 +44,8 @@ def __binary_op( | |||
The second operand involved in the operation, | |||
out: DNDarray, optional | |||
Output buffer in which the result is placed | |||
where: DNDarray, optional | |||
Condition of interest, where true yield the result of the operation else yield original value in out (uninitialized when out=None) |
We can use numpy's docs for `where`, I think they are a bit clearer. But we must expand on them a bit, e.g. is `where` supposed/expected to be distributed, and how.
@ClaudiaComito I've added several test cases with different split configurations of …
@neosunhan If you look at the list of checks below, you will see that the continuous-integration/jenkins/pr-merge has failed. Click on details, and you can see that the tests passed on 1 process, but failed on 2. Do you need help setting up your environment for multi-process? If you have installed …
@ClaudiaComito Thanks for the help! I have added the check for …
run tests
I'm ready to approve this, great job @neosunhan. Just a small documentation update needed.
heat/core/_operations.py
Outdated
will be set to the result of the operation. Elsewhere, the `out` array will retain its original
value. If an uninitialized `out` array is created via the default `out=None`, locations within
it where the condition is False will remain uninitialized. If distributed, must be distributed
along the same dimension as the `out` array.
This is only correct if `where` and `out` have the same shape.

For example if `out` is (100, 10000) and distributed along the columns (`out.split` is 1), `where` is (10000,), their shapes are broadcastable but `where` must be distributed along 0.
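A concrete sketch of that configuration in Heat code (shapes and split values taken from the example above; how the PR actually implements the check is not shown here):

```python
import heat as ht

# out: (100, 10000), distributed along the columns
out = ht.empty((100, 10000), dtype=ht.float32, split=1)

# where: (10000,), broadcastable against out's shape; after broadcasting, its only
# axis lines up with out's axis 1, so it must be split along its own axis 0.
where = ht.ones((10000,), dtype=ht.bool, split=0)
```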
"If distributed, the split axis (after broadcasting if required) must match that of the `out` array."

Would this phrasing be accurate?
yes, sounds accurate
run tests
One check can be saved
run tests
run tests
thanks a lot @neosunhan!
Description

Implementation of `out` and `where` functionality for `ht.divide` is fairly straightforward. However, use of both arguments at the same time causes complications due to lack of support for in-place `where` in pytorch.

Currently, I use a workaround by changing the underlying `torch.Tensor` of the `out` DNDarray. Possible alternatives include updating `indexing.where` to allow in-place modification or implementing a function that allows in-place modification using a mask (similar to `numpy.copyto`).

Issue/s resolved: #870
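A minimal sketch of the `numpy.copyto`-style alternative mentioned above, written against plain torch tensors; the function name and signature are assumptions for illustration, not part of this PR:

```python
import torch

def copyto_masked(dst, src, where):
    # Hypothetical helper: in-place masked copy, similar in spirit to
    # numpy.copyto(dst, src, where=...). Positions where the mask is False
    # keep dst's existing values.
    dst[where] = src[where]
    return dst
```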
Changes proposed:
- `out` functionality of `true_divide` in pytorch backend
- `indexing.where` used for the `where` argument
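For context, the kind of call this PR enables (a small made-up example; the exact values and dtypes are illustrative):

```python
import heat as ht

a = ht.arange(6, dtype=ht.float32, split=0)
b = ht.full((6,), 2.0, split=0)
out = ht.zeros(6, split=0)

# Divide only where the mask is True; elsewhere out keeps its existing values.
ht.div(a, b, out=out, where=a > 2)
```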
Type of change
New feature (non-breaking change which adds functionality)
Due Diligence
Does this change modify the behaviour of other functions? If so, which?
no