Bug/591 slicing memory issues #594
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #594      +/-   ##
==========================================
+ Coverage   96.47%   96.48%   +0.01%
==========================================
  Files          75       75
  Lines       15169    15214      +45
==========================================
+ Hits        14634    14679      +45
  Misses        535      535
==========================================
```

Continue to review full report at Codecov.
Mostly minor changes. A job well done. Thanks, Daniel.
Description
`getitem`/`setitem` now use tuples for most key types. Advanced indexing is also implemented via this approach. Keys for advanced indexing can be `list`s, `torch.Tensor`s, or `ht.DNDarray`s. This means that the results of `where` and `nonzero` can be fed directly into another DNDarray as a key. However, it also means that `list`s are treated differently than `tuple`s when given as a key.

Issue/s resolved: #591 #505
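To illustrate the list-vs-tuple distinction, here is a minimal sketch using NumPy, whose advanced-indexing semantics the DNDarray key handling follows (the array values are illustrative, not from this PR):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)

# A tuple key indexes one axis per element: x[(1, 2)] is the same as x[1, 2],
# i.e. the scalar at row 1, column 2.
tuple_result = x[(1, 2)]

# A list key triggers advanced indexing instead: it selects whole rows 1 and 2,
# producing a (2, 4) array rather than a scalar.
list_result = x[[1, 2]]
```

So the same two integers produce a scalar when packed in a tuple but a sub-array when packed in a list, which is why the two key types must be handled differently.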
Changes proposed:
- `getitem`/`setitem` create a tuple for a given key
- `reduce_op` neutral value array created using the gshape instead of the lshape

Type of change
Due Diligence
Does this change modify the behaviour of other functions? If so, which?
Yes; any function which uses `where` or `nonzero` can now use the DNDarray instead of a list.
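As a sketch of this pattern, here is the NumPy equivalent of feeding `nonzero`/`where` output straight back in as an index (NumPy's `nonzero` returns a tuple of index arrays; the PR's DNDarray-key support serves the same role in Heat):

```python
import numpy as np

x = np.array([[0, 5],
              [3, 0]])

# Index arrays for all nonzero entries, in row-major order.
mask_idx = np.nonzero(x)

# Feeding the result directly back in as a key selects those entries.
vals = x[mask_idx]

# np.where with a single condition argument behaves like np.nonzero,
# so its output can be used the same way.
same = x[np.where(x > 2)]
```

The point of the change is that this round-trip now works with the array types the library actually produces, with no intermediate conversion to a Python list.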