[Sparse] [lang] Add ti.activate support #1334
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master    #1334    +/-  ##
=======================================
  Coverage   85.48%   85.49%
=======================================
  Files          19       19
  Lines        3375     3377       +2
  Branches      630      630
=======================================
+ Hits         2885     2887       +2
  Misses        358      358
  Partials      132      132

Continue to review the full report at Codecov.
Thanks! I have one nit for the test.
n = 16

ptr = ti.root.pointer(ti.i, n)
ptr.dense(ti.i, n).place(x)
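For context, the snippet above declares the sparse layout used by the new test. The semantics of `ti.activate` that this PR adds can be sketched with a stdlib-only Python model; this is an illustration of pointer-SNode activation, not the Taichi API, and the class and method names here (`PointerNode`, `num_active`) are hypothetical:

```python
# A toy model of a Taichi pointer SNode (illustration only, not the
# Taichi API): the node keeps a set of activated cells; ti.activate
# marks a cell active without writing any data, while a write to an
# inactive cell activates it implicitly.

class PointerNode:
    def __init__(self, n):
        self.n = n              # number of cells
        self.active = set()     # indices of activated cells
        self.data = {}          # cell index -> stored value

    def activate(self, i):
        """Explicitly activate cell i (what ti.activate does)."""
        assert 0 <= i < self.n
        self.active.add(i)

    def write(self, i, value):
        """Writing to an inactive cell activates it implicitly."""
        self.activate(i)
        self.data[i] = value

    def num_active(self):
        return len(self.active)

ptr = PointerNode(16)
ptr.activate(3)        # activated explicitly, no data written
ptr.write(7, 1.0)      # activated implicitly by the write
assert ptr.num_active() == 2
```

A test for the PR could then activate a few cells and assert that a counter of active cells matches the number of `activate` calls.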
nit: Actually, WDYT if we remove this dense layer, i.e. use ptr.place(x)? If so, s[None] should be exactly the number of elements of x that are activated. It makes the test more obvious?
Maybe we want to test the behavior of multi-layer sparse, i.e. make OpenGL's fake dynamic support fail the test.
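The trade-off the two comments above are weighing can be made concrete with a small stdlib-only sketch (again an illustration, not the Taichi API; `elements_exposed` is a hypothetical helper): with the dense layer, activating one pointer cell exposes a whole block of n elements, so a count over elements of x differs from a count of activated pointer cells.

```python
# Why the dense layer matters when counting activations (illustration
# only, not the Taichi API): with ptr.dense(ti.i, n).place(x), one
# activated pointer cell exposes a block of n dense elements, whereas
# with ptr.place(x) each activation exposes exactly one element.

n = 16

def elements_exposed(active_pointer_cells, block_size):
    # every activated pointer cell exposes a full dense block
    return len(active_pointer_cells) * block_size

active = {2, 5}                          # two pointer cells activated
assert elements_exposed(active, n) == 32  # 2 blocks * 16 elements of x
assert len(active) == 2                   # but only 2 activations

# With ptr.place(x) (block_size == 1), the two counts coincide, which
# is why the simplified layout makes the test's expected value obvious.
assert elements_exposed(active, 1) == len(active)
```

Keeping the dense layer, on the other hand, exercises the multi-layer sparse path, which is what the second comment argues for.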
Thanks! LGTM.
Related issue: #1256 (comment)
Would you write a test for me? I'm not super familiar with sparse.