
[Refactor] [lang] Deprecate x.data_type() and use x.dtype instead #1374

Merged
131 changes: 94 additions & 37 deletions docs/meta.rst
@@ -17,55 +17,124 @@ Taichi kernels are *lazily instantiated* and a lot of computation can happen at
Template metaprogramming
Member:

I need to take a more careful pass on this part, which may take me a bit longer. If you want this PR merged right now, could you move this file into a different PR? I'll take a look later. (Hopefully moving a single file doesn't create too much trouble for you... Sorry for being so busy these days...)

Collaborator Author:
Done. I'm super familiar with git revert these days. May I merge this now?

------------------------

Taichi tensors are often used as globals, but you may use ``ti.template()``
as a type hint to pass a tensor as an argument. For example:

.. code-block:: python

    @ti.kernel
    def copy(x: ti.template(), y: ti.template()):
        for i in x:
            y[i] = x[i]

    a = ti.var(ti.f32, 4)
    b = ti.var(ti.f32, 4)
    c = ti.var(ti.f32, 12)
    d = ti.var(ti.f32, 12)
    copy(a, b)
    copy(c, d)


As the example above shows, template programming enables us to reuse code and
gain flexibility.
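
Each distinct set of template arguments may instantiate a separate compiled kernel. As a minimal sketch (assuming a default ``ti.init()``), the same template also accepts tensors of another data type:

.. code-block:: python

    import taichi as ti
    ti.init()

    @ti.kernel
    def copy(x: ti.template(), y: ti.template()):
        for i in x:
            y[i] = x[i]

    e = ti.var(ti.i32, 8)
    f = ti.var(ti.i32, 8)
    copy(e, f)  # a separate instantiation of copy, this time for i32 tensors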


Dimensionality-independent programming using grouped indices
------------------------------------------------------------

However, the ``copy`` template shown above is not perfect: it can only be
used to copy 1D tensors. What if we want to copy 2D tensors? Do we have to
write another kernel?

.. code-block:: python

    @ti.kernel
    def copy2d(x: ti.template(), y: ti.template()):
        for i, j in x:
            y[i, j] = x[i, j]

Not necessarily! Taichi provides the ``ti.grouped`` syntax, which packs the
loop indices into a grouped vector and therefore unifies kernels of different
dimensionalities. For example:

.. code-block:: python

    @ti.kernel
    def copy(x: ti.template(), y: ti.template()):
        for I in ti.grouped(y):
            # I is a vector with the same dimensionality as y and data type i32
            # If y is 0D, then I = None
            # If y is 1D, then I = ti.Vector([i])
            # If y is 2D, then I = ti.Vector([i, j])
            # If y is 3D, then I = ti.Vector([i, j, k])
            # ...
            x[I] = y[I]

    @ti.kernel
    def array_op(x: ti.template(), y: ti.template()):
        # if tensor x is 2D:
        for I in ti.grouped(x):  # I is a 2D vector with data type i32
            y[I + ti.Vector([0, 1])] = I[0] + I[1]

        # then it is equivalent to:
        for i, j in x:
            y[i, j + 1] = i + j
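
A quick Python-scope check of the kernels above might look like this (the shapes here are assumptions for illustration; ``y`` gets one extra column so the offset write in ``array_op`` stays in bounds):

.. code-block:: python

    x = ti.var(ti.f32, (3, 3))
    y = ti.var(ti.f32, (3, 4))

    array_op(x, y)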

Tensor meta data
----------------

Sometimes it is useful to get the data type (``tensor.dtype``) and shape (``tensor.shape``) of a tensor.
These attributes can be accessed in both Taichi kernels and Python scripts.

.. code-block:: python

    @ti.func
    def print_tensor_info(x: ti.template()):
        print('Tensor dimensionality is', len(x.shape))
        for i in ti.static(range(len(x.shape))):
            print('Size along dimension', i, 'is', x.shape[i])
        ti.static_print('Tensor data type is', x.dtype)
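
A minimal driver for the function above might be (the tensor name and shape are assumptions):

.. code-block:: python

    x = ti.var(ti.f32, (4, 8))

    @ti.kernel
    def driver():
        print_tensor_info(x)

    driver()  # reports dimensionality 2, sizes 4 and 8, and data type f32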

See :ref:`scalar_tensor` for more details.

.. note::

For sparse tensors, the full domain shape will be returned.


Matrix & vector meta data
-------------------------

Sometimes it is also useful to get the number of rows and columns of a matrix
when writing dimensionality-independent code, such as code reused between
2D and 3D physical simulations.

``matrix.m`` equals the number of columns of a matrix, while ``matrix.n`` equals
the number of rows. Since a vector is considered a matrix with one column,
``vector.n`` is simply the dimensionality of the vector.

.. code-block:: python

    @ti.kernel
    def foo():
        matrix = ti.Matrix([[1, 2], [3, 4], [5, 6]])
        print(matrix.n)  # 3
        print(matrix.m)  # 2
        vector = ti.Vector([7, 8, 9])
        print(vector.n)  # 3
        print(vector.m)  # 1
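
Since ``n`` and ``m`` are known at compile time, they can drive ``ti.static`` loops for dimensionality-independent code. A minimal sketch (the kernel name and values are assumptions):

.. code-block:: python

    @ti.kernel
    def diag_sum() -> ti.f32:
        mat = ti.Matrix([[1.0, 2.0], [3.0, 4.0]])
        total = 0.0
        for i in ti.static(range(mat.n)):  # unrolled over the rows
            total += mat[i, i]
        return total

    print(diag_sum())  # 5.0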



Compile-time evaluations
------------------------

Using compile-time evaluation will allow certain computations to happen when kernels are being instantiated.
This saves the overhead of those computations at runtime.

* Use ``ti.static`` for compile-time branching (for those who come from C++17, this is `if constexpr <https://en.cppreference.com/w/cpp/language/if>`_; a fuller sketch also follows this list):

.. code-block:: python

@@ -77,32 +77,20 @@

            x[0] = 1


* Use ``ti.static`` for forced loop unrolling:

.. code-block:: python

    @ti.kernel
    def func():
        for i in ti.static(range(4)):
            print(i)

        # is equivalent to:
        print(0)
        print(1)
        print(2)
        print(3)
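
A self-contained sketch of compile-time branching (the names ``enable_projection`` and ``x`` are assumptions):

.. code-block:: python

    import taichi as ti
    ti.init()

    enable_projection = True
    x = ti.var(ti.f32, 4)

    @ti.kernel
    def static_branch():
        if ti.static(enable_projection):  # evaluated at compile time; no runtime branch
            x[0] = 1

    static_branch()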


When to use for loops with ``ti.static``
@@ -120,7 +177,7 @@ For example, code for resetting this tensor of vectors should be
    @ti.kernel
    def reset():
        for i in x:
            for j in ti.static(range(x.n)):
                # The inner loop must be unrolled since j is a vector index
                # instead of a global tensor index.
                x[i][j] = 0
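
A self-contained version of the reset example, with an assumed declaration for ``x`` as a tensor of 3D vectors:

.. code-block:: python

    import taichi as ti
    ti.init()

    x = ti.Vector(3, dt=ti.f32, shape=8)

    @ti.kernel
    def reset():
        for i in x:
            for j in ti.static(range(x.n)):  # x.n == 3, known at compile time
                x[i][j] = 0

    reset()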
28 changes: 6 additions & 22 deletions docs/scalar_tensor.rst
@@ -107,49 +107,33 @@ You can access an element of the Taichi tensor by an index or indices.
Meta data
---------

.. attribute:: a.shape

   :parameter a: (Tensor) the tensor
   :return: (tuple) the shape of tensor ``a``

   ::

       x = ti.var(ti.i32, (6, 5))
       x.shape  # (6, 5)

       y = ti.var(ti.i32, 6)
       y.shape  # (6,)

       z = ti.var(ti.i32, ())
       z.shape  # ()


.. attribute:: a.dtype

   :parameter a: (Tensor) the tensor
   :return: (DataType) the data type of ``a``

   ::

       x = ti.var(ti.i32, (2, 3))
       x.dtype  # ti.i32


.. function:: a.parent(n = 1)
50 changes: 14 additions & 36 deletions docs/snode.rst
@@ -33,32 +33,19 @@ See :ref:`layout` for more details. ``ti.root`` is the root node of the data structure.
assert x.snode() == y.snode()


.. attribute:: tensor.shape

   :parameter tensor: (Tensor)
   :return: (tuple of integers) the shape of the tensor

   Equivalent to ``tensor.snode().shape``.

   For example,

   ::

       ti.root.dense(ti.ijk, (3, 5, 4)).place(x)
       x.shape  # returns (3, 5, 4)


.. function:: tensor.snode()
@@ -74,7 +61,7 @@ See :ref:`layout` for more details. ``ti.root`` is the root node of the data structure.
x.snode()


.. attribute:: snode.shape

   :parameter snode: (SNode)
   :return: (tuple) the size of the node along each axis

@@ -85,29 +72,16 @@

       blk2 = blk1.dense(ti.i, 3)
       blk3 = blk2.dense(ti.jk, (5, 2))
       blk4 = blk3.dense(ti.k, 2)
       blk1.shape  # ()
       blk2.shape  # (3, )
       blk3.shape  # (3, 5, 2)
       blk4.shape  # (3, 5, 4)


.. function:: snode.parent(n = 1)

:parameter snode: (SNode)
:parameter n: (optional, scalar) the number of parent steps, i.e. ``n=1`` for parent, ``n=2`` grandparent, etc.
:return: (SNode) the parent node of ``snode``

   ::

@@ -118,6 +92,10 @@

       blk1.parent()   # ti.root
       blk2.parent()   # blk1
       blk3.parent()   # blk2
       blk3.parent(1)  # blk2
       blk3.parent(2)  # blk1
       blk3.parent(3)  # ti.root
       blk3.parent(4)  # None
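
A self-contained sketch of the parent chain (the block declarations below are assumptions, standing in for the collapsed ones above):

::

    x = ti.var(ti.f32)
    blk1 = ti.root.dense(ti.i, 8)
    blk2 = blk1.dense(ti.j, 4)
    blk3 = blk2.dense(ti.k, 2)
    blk3.place(x)

    blk3.parent()   # blk2
    blk3.parent(2)  # blk1
    blk3.parent(3)  # ti.root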


Node types
3 changes: 3 additions & 0 deletions docs/vector.rst
@@ -191,6 +191,9 @@ Methods
Vectors are special matrices with only 1 column. In fact, ``ti.Vector`` is just an alias of ``ti.Matrix``.


Meta data
---------

.. attribute:: a.n

:parameter a: (Vector or tensor of Vector)
16 changes: 10 additions & 6 deletions python/taichi/lang/expr.py
@@ -68,7 +68,7 @@ def initialize_accessor(self):
            return
        snode = self.ptr.snode()

        if self.dtype == f32 or self.dtype == f64:

            def getter(*key):
                assert len(key) == taichi_lang_core.get_max_num_indices()

@@ -78,7 +78,7 @@ def setter(value, *key):

                assert len(key) == taichi_lang_core.get_max_num_indices()
                snode.write_float(key, value)
        else:
            if taichi_lang_core.is_signed(self.dtype):

                def getter(*key):
                    assert len(key) == taichi_lang_core.get_max_num_indices()
@@ -135,15 +135,19 @@ def shape(self):

    def dim(self):
        return len(self.shape)

    @property
    def dtype(self):
        return self.snode().dtype

    @deprecated('x.data_type()', 'x.dtype')
    def data_type(self):
        return self.snode().dtype

    @python_scope
    def to_numpy(self):
        from .meta import tensor_to_ext_arr
        import numpy as np
        arr = np.zeros(shape=self.shape, dtype=to_numpy_type(self.dtype))
        tensor_to_ext_arr(self, arr)
        import taichi as ti
        ti.sync()
@@ -154,7 +158,7 @@ def to_torch(self, device=None):
        from .meta import tensor_to_ext_arr
        import torch
        arr = torch.zeros(size=self.shape,
                          dtype=to_pytorch_type(self.dtype),
                          device=device)
        tensor_to_ext_arr(self, arr)
        import taichi as ti
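
The net effect of the ``expr.py`` change, as a hedged usage sketch:

.. code-block:: python

    import taichi as ti
    ti.init()

    x = ti.var(ti.f32, 4)

    print(x.dtype)        # ti.f32, via the new property
    print(x.data_type())  # same value, but may emit a deprecation warning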