Goal
Make error reporting in PyTaichi more user-friendly and more comprehensive.
Make type-related parts in the compiler codebase easier to maintain.
Keep backward-compatibility if possible.
Premise
We prioritize the pain points of AOT users, so the types frequently used in AOT are designed in a more self-contained way. Internal IR types are out of scope for this plan.
Overview of the major types
In the diagram above, A --> B means A can be nested in B, and a dashed box means it can nest itself.
Q & A
Why not unify NdarrayType and TensorType?
Because the shape of a TensorType is known at compile time, which is close to backend array types; an NdarrayType only has its ndim known at compile time and needs to be bound to a buffer, which makes it more like RWStructuredBuffer<T> in Slang.
Why not unify StructType and ArgPackType?
Because ArgPackType needs to handle buffer bindings and can contain TextureType, while StructType corresponds to backend struct types and can be directly laid out in memory.
Details of types
(Layer 0) PrimitiveType
API of the type itself: ti.bool / (u/i)(8/16/32/64) / f(16/32/64)
Construction:
Python scope: no need to construct
Taichi scope: parameter; untyped literals (which take default_ip/default_fp when operated with a Taichi value); typename(literal) for typed literals; typename(varname) for type casts
Representation:
Python scope: does not exist
Taichi scope: backend primitive types, in either local memory or shared memory
Operation: bool supports short-circuit and non-short-circuit logical operations; floating point numbers support arithmetic operations; integers support arithmetic operations and bit operations
Argument passing: Python value, which will be cast to the target type
Return value: a corresponding numpy type
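The untyped-literal rule above can be sketched in a few lines (hypothetical helper names, not Taichi's actual implementation; the defaults stand in for the default_ip/default_fp settings chosen at ti.init):

```python
# Hypothetical sketch of the promotion rule for untyped literals.
# DEFAULT_IP / DEFAULT_FP stand in for the configurable default_ip / default_fp.
DEFAULT_IP = "i32"
DEFAULT_FP = "f32"

def literal_type(value, typed_as=None):
    """Return the primitive type an untyped Python literal takes
    when it is operated with a Taichi value."""
    if typed_as is not None:      # e.g. ti.i64(3): a typed literal keeps its type
        return typed_as
    return DEFAULT_IP if isinstance(value, int) else DEFAULT_FP
```

For example, `literal_type(3)` yields the default integer type, while `literal_type(3, typed_as="i64")` keeps the explicitly requested type.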
(Layer 1) TensorType
API of the type itself: ti.types.vector(n, dtype); ti.types.matrix(n, m, dtype)
Construction:
Python/Taichi scope: typename([...]) or ti.Vector/Matrix([...]) (in this case type will be inferred automatically)
Taichi scope: parameter; typename(varname) for type cast
Representation:
Python scope: Tensor class, where a numpy array is stored
Taichi scope: backend array types (for special cases like vec4, backend vec4 type will be used), in either local memory or shared memory
Operation: support element-wise operations that dtype supports; support matrix operations in addition
Argument passing: Python list or Tensor class
Return value: Tensor class
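A minimal sketch of the Python-scope Tensor representation described above (hypothetical; the real class would wrap a numpy array and support the full element-wise and matrix operation set):

```python
class Tensor:
    """Toy stand-in for the Python-scope Tensor class."""
    def __init__(self, shape, data):
        self.shape = tuple(shape)   # shape is fixed at construction time
        self.data = list(data)      # flat list standing in for a numpy array

    def __add__(self, other):
        # Element-wise ops require matching shapes, mirroring
        # TensorType's compile-time-known shape.
        assert self.shape == other.shape
        return Tensor(self.shape, [a + b for a, b in zip(self.data, other.data)])
```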
(Layer 1) StructType
API of the type itself:
ti.types.struct(a=dtype_a, b=dtype_b)
ti.dataclass
Construction:
Python/Taichi scope: typename([...]) or ti.Struct(a=..., b=...) (in this case type will be inferred automatically)
Taichi scope: parameter; typename(varname) for type cast
Representation:
Python scope: Struct class, where a numpy value is stored
Taichi scope: backend struct types, where layout can be designated, in either local memory or shared memory
Operation: no support
Argument passing: Python dict or Struct class
Return value: Struct class
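The "layout can be designated" point can be illustrated with Python's stdlib struct module (a sketch of the idea only; the field names and the packed little-endian layout are assumptions, not Taichi's actual backend layout):

```python
import struct

# A StructType like ti.types.struct(a=ti.i32, b=ti.f32), laid out as one
# i32 followed by one f32 with no padding ("<" = little-endian, assumed layout).
fmt = "<if"
packed = struct.pack(fmt, 7, 2.5)      # 4 + 4 = 8 bytes in memory
a, b = struct.unpack(fmt, packed)      # round-trips to the original values
```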
(Layer 2) NdarrayType
API of the type itself: ti.types.ndarray(ndim=..., dtype=...)
Construction:
Python scope: ti.ndarray()
Taichi scope: parameter
Representation:
Python scope: Ndarray class (actually a DeviceAllocation)
Taichi scope: BufferBind
Argument passing: bind buffer
(Layer 2) TextureType
API of the type itself: ti.types.texture(ndim=...) / ti.types.rw_texture(ndim=..., lod=..., fmt=...)
Construction:
Python scope: ti.Texture()
Taichi scope: parameter
Representation:
Python scope: Texture class (actually a DeviceAllocation)
Taichi scope: TextureBind
Argument passing: bind buffer
(Layer 3) ArgPackType
API of the type itself: ti.types.arg_pack(a=..., b=...)
Construction:
Python scope: ti.ArgPack(a=..., b=...)
Taichi scope: parameter
Representation:
Python scope: ArgPack class
Taichi scope: binding info
All parameters of a kernel will be implicitly constructed as an ArgPack; same for return values
Argument passing: for buffer types (NdarrayType and TextureType), bind those buffers; for other types, lay those arguments into an argument buffer
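The two-way split above can be sketched as follows (all names are hypothetical; this illustrates the rule, not the actual runtime code):

```python
# Kinds that occupy a buffer binding rather than a slot in the argument buffer.
BUFFER_KINDS = {"ndarray", "texture"}

def pack_args(args):
    """Partition kernel arguments the way an implicit ArgPack would."""
    bindings, arg_buffer = [], []
    for name, (kind, value) in args.items():
        if kind in BUFFER_KINDS:
            bindings.append((name, value))      # bind the DeviceAllocation
        else:
            arg_buffer.append((name, value))    # lay the value into the buffer
    return bindings, arg_buffer
```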
(Layer 4) FunctionType
It will be used to represent the signature of real (or internal) functions and kernels. Details TBD.
SNodeTree-related types
Plan A (feasible solution for now): SNode is not explicitly exposed to AOT users; it can only be used inside kernels (not as arguments)
implicit root buffer, which can be part of an ArgPack
use FieldExpression to represent a field; its type can be set as a Tensor of Struct/Tensor/Primitive to ease type checking
for other SNode ops, the SNode will still be directly embedded into the op
Plan B (future work): allow instantiating SNodeTrees and let AOT users pass in a buffer for a single SNodeTree
Mesh / SparseMatrixBuilder / Quant
No change for now.
Why not combine type annotations and the actual instances (e.g. def kern(a: ti.ndarray))?
The actual instance might not be an instance of the type annotation. ti.types.ndarray can actually take a ti.ndarray, a numpy array, or a torch tensor.
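One way to see the distinction is that the annotation describes a structural contract rather than a concrete class (a hypothetical sketch; the real matching logic is richer):

```python
class NdarrayAnnotation:
    """Toy stand-in for a ti.types.ndarray annotation."""
    def __init__(self, ndim):
        self.ndim = ndim

    def matches(self, obj):
        # Accept any array-like object with the right dimensionality,
        # be it a Taichi ndarray, a numpy array, or a torch tensor.
        return hasattr(obj, "shape") and len(obj.shape) == self.ndim
```

A dummy object with `shape == (4, 4)` matches `NdarrayAnnotation(2)` but not `NdarrayAnnotation(3)`.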
What does ti.template() mean in ti.kernel?
It's similar to N in template<int N> in C++. Here is one attempt to distinguish it from normal parameters:
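As a pure-Python analogy (hypothetical names; real Taichi specializes and compiles kernels, this only mimics the caching behavior): a ti.template() argument selects a compile-time instantiation, while a normal parameter is just a runtime value passed to one instantiation.

```python
_instantiations = {}

def specialize(kernel_body, template_arg):
    """Return one 'compiled' instantiation per (kernel, template arg) pair,
    mimicking how template<int N> yields a distinct function per N."""
    key = (kernel_body, template_arg)
    if key not in _instantiations:
        _instantiations[key] = lambda *runtime: kernel_body(template_arg, *runtime)
    return _instantiations[key]
```

Calling `specialize` twice with the same template argument returns the same instantiation; a different argument yields a new one.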
Why does ti.template() accept a field but not a ndarray?
An ndarray actually needs a parameter position to pass the DeviceAllocation, while ti.template() has no actual parameter position.
Action items
General: Update the language reference to describe the type system systematically
General: Solve the API inconsistency problem (Upper/lower case: ti.Texture vs ti.ndarray vs ti.field vs ti.Vector; Order/Name of parameters: ndim vs num_dimensions; ...)
PrimitiveType: refine f16 to make it as mature as other primitive types; throw an error on backends lacking f16 support
PrimitiveType: add bool type (backend implementation can still use i32 but in CHI IR they should be distinguished clearly) (Related: Boolean (u1) type support #577)
ti.template() in ti.func
ti.types.ndarray(dtype=ti.types.vector(dtype=ti.i32)) (Related: Save element_dim parameter for taichi.types.ndarray. #7231)
Future work