
[bug] [refactor] Fix error when ti.init() not called by deprecating Expr.layout_materialized #1347

Merged
6 changes: 3 additions & 3 deletions python/taichi/lang/expr.py
@@ -36,7 +36,7 @@ def __init__(self, *args, tb=None):

@python_scope
def __setitem__(self, key, value):
- impl.get_runtime().try_materialize()
+ impl.get_runtime().materialize()
self.initialize_accessor()
if key is None:
key = ()
@@ -49,7 +49,7 @@ def __setitem__(self, key, value):

@python_scope
def __getitem__(self, key):
- impl.get_runtime().try_materialize()
+ impl.get_runtime().materialize()
self.initialize_accessor()
if key is None:
key = ()
@@ -114,7 +114,7 @@ def fill(self, val):
from .meta import fill_tensor
fill_tensor(self, val)

- @deprecated('tensor.parent()', 'tensor.snode().parent()')
+ #@deprecated('tensor.parent()', 'tensor.snode().parent()')
Member: So we no longer deprecate this?

Collaborator (Author): Yes: since tensor.parent() returns a tensor while tensor.snode().parent() returns a snode, it is irreplaceable, and many tests use it.

def parent(self, n=1):
import taichi as ti
p = self.snode().parent(n)
18 changes: 8 additions & 10 deletions python/taichi/lang/impl.py
@@ -143,7 +143,6 @@ def __init__(self, kernels=None):
self.target_tape = None
self.inside_complex_kernel = False
self.kernels = kernels or []
- Expr.materialize_layout_callback = self.materialize

def get_num_compiled_functions(self):
return len(self.compiled_functions) + len(self.compiled_grad_functions)
@@ -162,15 +161,10 @@ def create_program(self):
if self.prog is None:
self.prog = taichi_lang_core.Program()

- def try_materialize(self):
-     if not Expr.layout_materialized:
-         Expr.materialize_layout_callback()

def materialize(self):
if self.materialized:
return
self.create_program()
- Expr.layout_materialized = True

def layout():
for func in self.layout_functions:
@@ -188,8 +182,7 @@ def clear(self):
if self.prog:
self.prog.finalize()
self.prog = None
- Expr.materialize_layout_callback = None
- Expr.layout_materialized = False
+ self.materialized = False

def get_tape(self, loss=None):
from .tape import Tape
@@ -279,8 +272,13 @@ def var(dt, shape=None, offset=None, needs_grad=False):
assert (offset is not None and shape is None
) == False, f'The shape cannot be None when offset is being set'

- assert not get_runtime(
- ).materialized, 'No new variables can be declared after kernel invocations or Python-scope tensor accesses.'
+ if get_runtime().materialized:
+     raise RuntimeError(
+         "No new variables can be declared after materialization, i.e. kernel invocations "
+         "or Python-scope tensor accesses. Only after all the data layouts are defined "
+         "can we initialize the data and run computation. Try appending ti.init() or ti.reset() "
+         "right after 'import taichi as ti' if you are using a Jupyter notebook.")

# primal
x = Expr(taichi_lang_core.make_id_expr(""))
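The runtime change in impl.py follows a common pattern: a one-way "materialized" flag that an idempotent materialize() sets, var() checks, and clear() (ti.reset()) resets. A minimal standalone sketch of that pattern (not Taichi's actual code; the Runtime class and names here are illustrative only):

```python
class Runtime:
    """Minimal sketch of the lazy-materialization pattern this PR
    moves into the runtime: materialize() is idempotent, var()
    refuses to run after materialization, and clear() resets the
    flag so a reset restores a fresh state."""

    def __init__(self):
        self.materialized = False
        self.vars = []

    def materialize(self):
        if self.materialized:
            return  # idempotent: later calls are no-ops
        # ... compile data layouts here ...
        self.materialized = True

    def var(self, name):
        if self.materialized:
            raise RuntimeError(
                "No new variables can be declared after materialization")
        self.vars.append(name)

    def clear(self):
        self.materialized = False
        self.vars = []


rt = Runtime()
rt.var("x")
rt.materialize()   # first kernel launch / Python-scope access triggers this
rt.materialize()   # second call is a no-op
try:
    rt.var("y")    # too late: layouts are frozen
except RuntimeError as e:
    print(e)
rt.clear()         # like ti.reset(): declaring variables works again
rt.var("y")
```

Keeping the flag on the runtime object, rather than as class state on Expr, is what lets clear() restore a truly fresh state without a module-level callback.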
4 changes: 2 additions & 2 deletions python/taichi/lang/snode.py
@@ -53,7 +53,7 @@ def lazy_grad(self):
self.ptr.lazy_grad()

def parent(self, n=1):
- impl.get_runtime().try_materialize()
+ impl.get_runtime().materialize()
p = self.ptr
while p and n > 0:
p = p.parent
@@ -73,7 +73,7 @@ def dim(self):

@property
def shape(self):
- impl.get_runtime().try_materialize()
+ impl.get_runtime().materialize()
dim = self.ptr.num_active_indices()
ret = [self.ptr.get_num_elements_along_axis(i) for i in range(dim)]

50 changes: 50 additions & 0 deletions tests/python/test_000_runtime.py
@@ -0,0 +1,50 @@
import taichi as ti


# The first test to run, ever:
def test_000_without_init():
Member: What does 000 mean here and in the file name?

Collaborator (Author): It seems pytest runs test files in lexicographic order, so I use 000 to make test_000_without_init run at the very start, before ti.init() has ever been called by @ti.all_archs, to test whether Taichi is functional even without ti.init().
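The collection-order claim above can be checked with plain Python: pytest collects test files in sorted order within a directory, and ASCII digits sort before letters, so a 000 prefix wins. (The file names besides test_000_runtime.py are made up for illustration.)

```python
# pytest collects test modules in sorted order; '0' (ASCII 48)
# sorts before any letter, so the 000-prefixed file runs first.
files = ["test_ad_basics.py", "test_000_runtime.py", "test_zero.py"]
print(sorted(files))
# ['test_000_runtime.py', 'test_ad_basics.py', 'test_zero.py']
```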

Member: Interesting - maybe we should simply let ti.reset clear everything, as if no ti.init() has been called? This will make the system more testable.

Collaborator (Author): Regarding ti.reset(): I want to test that things work when neither ti.reset nor ti.init has been called, so I'm creating a sandboxed Taichi by generating a taichi_temp_test.py for the test.

Btw, we could add a ti.reset() to the Taichi startup script to make your test method work? Not sure if that would crash the doc-bots or slow down startup.

    assert ti.cfg.arch == ti.cpu

    x = ti.var(ti.i32, (2, 3))
    assert x.shape == (2, 3)

    x[1, 2] = 4
    assert x[1, 2] == 4


@ti.all_archs
@ti.must_throw(RuntimeError)
def test_materialization_after_kernel():
    x = ti.var(ti.f32, (3, 4))

    @ti.kernel
    def func():
        print(x[2, 3])

    func()

    y = ti.var(ti.f32, (2, 3))
    # ERROR: No new variable should be declared after kernel invocation!


@ti.all_archs
@ti.must_throw(RuntimeError)
def test_materialization_after_access():
    x = ti.var(ti.f32, (3, 4))

    print(x[2, 3])

    y = ti.var(ti.f32, (2, 3))
    # ERROR: No new variable should be declared after Python-scope tensor access!


@ti.all_archs
@ti.must_throw(RuntimeError)
def test_materialization_after_get_shape():
    x = ti.var(ti.f32, (3, 4))

    print(x.shape)

    y = ti.var(ti.f32, (2, 3))
    # ERROR: No new variable should be declared after Python-scope tensor access!
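The tests above lean on @ti.must_throw(RuntimeError), which passes a test only if the body raises the given exception. A simplified stand-in sketch of such a decorator (this is an assumption about its behavior, not Taichi's actual implementation):

```python
import functools


def must_throw(exc_type):
    """Simplified must_throw-style decorator: the wrapped test
    passes only if it raises the given exception type."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                fn(*args, **kwargs)
            except exc_type:
                return  # expected failure: test passes
            raise AssertionError(
                f"{fn.__name__} did not raise {exc_type.__name__}")
        return wrapper
    return decorator


@must_throw(RuntimeError)
def declares_after_materialization():
    # Stand-in for declaring a ti.var after a kernel invocation.
    raise RuntimeError("No new variables can be declared after materialization")


declares_after_materialization()  # completes silently: the error was expected
```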