
[Sparse] Leaf-level bitmask for CPU/CUDA #671

Closed
yuanming-hu opened this issue Mar 28, 2020 · 0 comments
Assignees
Labels
feature request Suggest an idea on this project

Comments

@yuanming-hu (Member)

Concisely describe the proposed feature

As of v0.5.8, ti.bitmasked(ti.ij, 128).place(x) is not supported, because we currently assume that the parent of a place must be a dense node. We should remove this assumption.

Additional comments
The ultimate goal of Taichi sparse programming is to make algorithms fully independent of data structures. Assuming the finest level must be a dense node is actually harmful, since the activity mask then has the granularity of the dense node's dimensions (e.g. 4x4x8, which is data-structure-dependent) instead of 1x1x1.
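To illustrate the granularity difference, here is a minimal plain-Python sketch (not the Taichi API; the function names and the 4x4 block shape are illustrative assumptions). With a dense leaf, touching a single element activates the whole enclosing block; with a leaf-level bitmask, only the touched element becomes active.

```python
# Hypothetical sketch contrasting activity-mask granularity.
# Not Taichi code: names and the block shape are assumptions for illustration.

BLOCK = (4, 4)  # dense leaf block shape (data-structure-dependent)

def active_cells_dense_leaf(touched, block=BLOCK):
    """Dense-leaf mask: granularity is the whole block containing each touched cell."""
    active = set()
    for (i, j) in touched:
        bi, bj = i // block[0], j // block[1]  # block coordinates
        for di in range(block[0]):
            for dj in range(block[1]):
                active.add((bi * block[0] + di, bj * block[1] + dj))
    return active

def active_cells_leaf_bitmask(touched):
    """Leaf-level bitmask: granularity is 1x1, exactly the touched cells."""
    return set(touched)

touched = {(0, 0), (5, 6)}
print(len(active_cells_dense_leaf(touched)))    # 32: two full 4x4 blocks activated
print(len(active_cells_leaf_bitmask(touched)))  # 2: only the touched cells
```

With only two touched cells, the dense-leaf mask reports 32 active cells (two full 4x4 blocks), while the leaf-level bitmask reports exactly 2, which is why the finest-level-must-be-dense assumption leaks data-structure details into the algorithm.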
