Experimental support for ti.func multi-return with scalar values #536 (3) #543
Conversation
no print_preprocessed or 1
fix TI_PRINT_PROCESSED no visit_Return from taichi-dev#536
Thanks! This is really helpful. Sorry about the force-push; I had to maintain a clean changeset for reviewing.
Would it be possible to briefly test this? For example,
This is breaking tests because ti.Matrix return is not supported. Should
How do we zero-initialize given types? e.g.,
Suggestions are super welcome. I use a new function
Sorry, I was in a meeting this morning. Can we try something like
That's exactly what I've done. The difference is that we can't write
Check out this example:

```python
import taichi as ti

ti.init()  # print_preprocessed=True

x = ti.var(ti.f32, ())
u = ti.Vector(2, dt=ti.f32, shape=())
z = ti.Vector(3, dt=ti.f32, shape=())

@ti.kernel
def func(t: ti.f32):
    a = ti.Vector([2, 3])
    s = fake_absolute(t)
    x[None] = s
    u[None] = normalized(a)
    z[None] = zero_by_default(a)

@ti.func
def fake_absolute(x):  # will guess return type by alloca, better not
    if x > 0:
        return float(x)
    else:
        return int(-x)

@ti.func
def zero_by_default(x) -> ti.vec(3):  # unlike ti.Vector (the value container), ti.vec is used only as a type signature
    pass

@ti.func
def normalized(v) -> ti.vec(2, dt=ti.f32):
    return v / 2

func(233.33)
print(x[None])
func(-233.33)
print(x[None])
print(u[None][0])
print(u[None][1])
print(z[None][2])
```
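The zero-by-default idea from the annotated signature above could be sketched like this. Note that `VecType` here is an invented stand-in for illustration, not Taichi's real `ti.vec` API:

```python
# Hypothetical sketch: a type-signature object that knows how to build
# its own zero value, so a function body of `pass` can still produce a
# well-defined default. `VecType` is an illustrative name only.

class VecType:
    def __init__(self, n, dt=float):
        self.n, self.dt = n, dt

    def zero(self):
        # The zero value of this type: n copies of dt's zero.
        return [self.dt(0)] * self.n

print(VecType(3).zero())  # -> [0.0, 0.0, 0.0]
```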
Considering the fact that
I suggest
The switch from inline-based functions to IR-based functions will not be too hard if we follow this strategy.
This works only with the inline ones, right? Like
Sounds cool! Will work on this tomorrow.
Got
This is likely because a
Tried adding it in a different place; it became
Oh, please print out the LLVM IR to see if you have a return statement in the middle of any basic block. Note that LLVM follows a control-flow-graph + basic-block IR strategy, and return statements can only be the last statement of each basic block.
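The basic-block constraint above is usually handled by rewriting early returns into a single exit point. This is only an illustrative Python sketch of the transformation, not Taichi's actual pass:

```python
# Hypothetical sketch: lowering early returns into a single exit, so
# that "return" appears only at the end of the function body -- mirroring
# LLVM's rule that `ret` must be the terminator of a basic block.

def with_early_return(x):
    if x > 0:
        return x       # a return "in the middle" of control flow
    return -x

def single_exit(x):
    # Store the result in a local and fall through to one final return.
    result = 0
    returned = False
    if x > 0:
        result = x
        returned = True
    if not returned:
        result = -x
    return result      # the only return: the block terminator

assert single_exit(3) == with_early_return(3) == 3
assert single_exit(-4) == with_early_return(-4) == 4
```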
(The function is inlined for now) |
Oh, I see. For the LLVM backend, maybe we can simply create a real LLVM function (

Or maybe it's easier if you start by adding function body definitions and function calls to your OpenGL backend, since you will then be issuing C++ code, which is much easier to debug.

(Of course, we will need a function inlining pass sooner or later.)
After digging into your implementation, I feel like there's a misunderstanding. We want to end up with an IR system that supports function calls, for example
After implementing this IR extension, multi-return functions could be handled systematically. However, this may be a lot of work, and we should probably use multiple PRs. I can go ahead and implement the scaffold of the IR system if you want. (Originally, the quick & hacky solution was to create a local variable in the Python scope to store the return value, without creating any Taichi functions, since the Python function will get inlined. However, that's just a temporary solution.)
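A minimal sketch of what an IR with first-class function calls might look like. All class and field names here are illustrative stand-ins, not Taichi's actual IR:

```python
# Hypothetical IR nodes: a function definition and a call statement,
# instead of inlining at AST-generation time. Names are invented.

class FuncDefStmt:
    def __init__(self, name, params, body):
        # `body` is a Python callable standing in for a list of IR
        # statements; it takes the argument environment.
        self.name, self.params, self.body = name, params, body

class FuncCallStmt:
    def __init__(self, func, args):
        self.func, self.args = func, args

    def evaluate(self):
        # Toy interpreter: bind arguments by position, then run the body.
        env = dict(zip(self.func.params, self.args))
        return self.func.body(env)

double = FuncDefStmt("double", ["v"], lambda env: env["v"] * 2)
call = FuncCallStmt(double, [21])
print(call.evaluate())  # -> 42
```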
Got rid of
When a function is inlined, its return value should be stored in a local variable for its caller to read, instead of creating a return. In LLVM,

However, ideally we don't want to inline everything; that's why we are creating functions in Taichi IR, instead of inlining everything in the Python scope.
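The store-to-a-local idea can be illustrated textually. This is a hedged sketch of the inlining rewrite, with the temporary name `_ret` invented for illustration:

```python
# Hypothetical illustration of inlining with a return-value slot: the
# callee's `return e` becomes an assignment `_ret = e`, and the caller
# reads `_ret` instead of a call result.

# Original callee:
#   def half(v): return v / 2
# After inlining `half(x)` into the caller, conceptually:

def caller(x):
    # --- inlined body of half(x) ---
    _ret = x / 2      # "return x / 2" rewritten as a store to a local
    # --- end of inlined body ---
    y = _ret          # the caller reads the local variable
    return y

print(caller(10))  # -> 5.0
```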
Oh, that would be great! I don't have much time to read the LLVM docs during the school term; if you could help, it would help me a lot!
Sounds good! I may be slow since I have many things to do simultaneously, but I already have a clear plan in my mind :-) |
So
Yeah, maybe translate |
It's just like
The Python-side AST generation currently does the inlining job. You are right that the mechanism is the same as

I have to go to sleep as I have an early meeting tomorrow morning. Good night!
Have a good dream! |
How to add |
Temporarily giving up, since this is not on the v0.6.0 roadmap.
Let me spend a few more days systematically thinking about this issue. Introducing new components to the IR system needs careful consideration. Sorry about the delay.
I found that |
That's exactly what I'm thinking about. We need the IR to lower the compound return values into what LLVM supports. Again, designing a new IR extension needs a lot of careful consideration, and I would spend some time on it before starting implementation.
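Lowering a compound return value can be sketched as scalarization: the caller supplies an output buffer and the callee writes one scalar per component instead of returning an aggregate. The names below are illustrative only, not Taichi's actual lowering:

```python
# Hedged sketch: lowering a compound (2-vector) return value into the
# scalar stores a backend like LLVM handles directly. `normalized` from
# the earlier example conceptually becomes:

def normalized_lowered(v, out):
    # `return v / 2` for a 2-vector, scalarized component by component.
    out[0] = v[0] / 2
    out[1] = v[1] / 2

buf = [0.0, 0.0]
normalized_lowered([2.0, 3.0], buf)
print(buf)  # -> [1.0, 1.5]
```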
Replaced by #612 |
Related issue id = #536