Describe the bug

Related issue: #6615
When `arch=ti.gpu`, an assertion failure terminates the for loop immediately. However, the loop is not terminated on the CPU backend.
To Reproduce
```python
import taichi as ti

ti.init(arch=ti.gpu)  # when arch=ti.cpu the output is different

@ti.kernel
def test():
    for i in range(3):
        assert i == 1.0
        print(i)

test()
```
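For comparison, the semantics the GPU backend exhibits match ordinary Python: an assertion failure raises immediately and no further iterations run. A minimal plain-Python sketch (not Taichi) of the same loop:

```python
# Plain-Python analogue of the kernel above: `assert i == 1.0` fails on the
# very first iteration (i == 0), raises AssertionError, and the loop stops
# before anything is recorded.
printed = []
try:
    for i in range(3):
        assert i == 1.0
        printed.append(i)
except AssertionError:
    pass

print(printed)  # empty list: the loop terminated on the first failed assert
```

The reported CPU behavior, where iterations continue past the failed assertion, diverges from this baseline.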
Log/Screenshots

When `arch=ti.gpu` the output is shown as:
```
[Taichi] version 1.3.0, llvm 15.0.4, commit 9c667572, linux, python 3.8.13
[Taichi] Starting on arch=cuda
1
```
When `arch=ti.cpu` the output is shown as:
```
[Taichi] version 1.3.0, llvm 15.0.4, commit 9c667572, linux, python 3.8.13
[Taichi] Starting on arch=x64
0
1
2
3
```
Additional comments

The final IR results on both backends are the same except that `grid_dim` and `block_dim` are different.