Handle constructing a cudf.Scalar from a cudf.Scalar #7639
Conversation
assert x.value == y.value

# check that this works:
y.device_value
Should we test that the synchronization state of the result is the same as the original?
x._is_host_value_current == y._is_host_value_current
x._is_device_value_current == y._is_device_value_current
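For reference, a minimal sketch of such a test, assuming the private _is_host_value_current / _is_device_value_current flags discussed above (implementation details that may change):

```python
import cudf


def test_construct_scalar_from_scalar_sync_state():
    x = cudf.Scalar(1)
    y = cudf.Scalar(x)  # construct a Scalar from a Scalar

    assert x.value == y.value
    # The result should be in the same synchronization state as the
    # original (these are private flags, not public API).
    assert x._is_host_value_current == y._is_host_value_current
    assert x._is_device_value_current == y._is_device_value_current
```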
I'd be -1 on testing internals like this, since they are implementation details that can easily change. But it looks like we're already doing that in these tests. It's your call: I'm happy to add it.
One can argue that we should have tests that exercise only API functionality - just checking that the results of calling our methods and functions are what we expect them to be.
Then there are things that we actually want to work a certain way under the hood, which can easily do something unexpected or unwanted when subject to changes - I'd argue we want to test those too. It's just that if not here, I am not sure where else to put such tests.
I definitely see your point, and have added the test as asked.
I think in this specific case, the only external/user-facing behaviour is performance, which would ideally be captured in a benchmark rather than a test.
The problem with testing internals/optimizations is that when we do decide to change implementation details, we have to delete tests. At that point, there may be difficult-to-answer questions like:
- should we replace the tests we are deleting with new tests?
- will deleting these tests now cause some user-facing behaviour to be untested?
- who can decide?
Changes look good. Is there any reason we should make a copy?
To be clear, are you asking if we are currently making a copy? Or if we should consider making a copy? I think the current behaviour is not making a copy of the underlying (device) value.
To clarify - the current implementation does not make a copy; I am wondering if there's any reason it should. I would rather keep it as it currently is and don't see why we would want to change it. I'm mostly asking whether anyone can think of a reason I am not seeing.
Got it. Yeah, good point. Given that
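For context, a hypothetical sketch of the no-copy construction being discussed; the class and attribute names below are illustrative, not cudf's actual internals:

```python
class Scalar:
    def __init__(self, value):
        if isinstance(value, Scalar):
            # No copy: share the underlying host/device values and carry
            # over the synchronization state of the source scalar.
            self._host_value = value._host_value
            self._device_value = value._device_value
            self._is_host_value_current = value._is_host_value_current
            self._is_device_value_current = value._is_device_value_current
        else:
            # A plain Python value starts with only the host side
            # materialized; the device value would be created lazily.
            self._host_value = value
            self._device_value = None
            self._is_host_value_current = True
            self._is_device_value_current = False
```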
Codecov Report
@@             Coverage Diff              @@
##           branch-0.19    #7639   +/-   ##
============================================
+ Coverage       81.86%   82.47%   +0.60%
============================================
  Files             101      101
  Lines           16884    17399     +515
============================================
+ Hits            13822    14349     +527
+ Misses           3062     3050      -12
Continue to review full report at Codecov.
@gpucibot merge
...also fix DeviceScalar.__repr__ to print "DeviceScalar" instead of "Scalar".
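A minimal sketch of that kind of repr fix, assuming a value attribute; this is illustrative, not cudf's actual DeviceScalar implementation:

```python
class DeviceScalar:
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        # Derive the name from the class instead of hard-coding "Scalar",
        # so instances print as "DeviceScalar(...)".
        return f"{type(self).__name__}({self.value!r})"


print(repr(DeviceScalar(42)))  # DeviceScalar(42)
```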