
v0.7.0 Roadmap #677

Closed
10 tasks done
yuanming-hu opened this issue Mar 29, 2020 · 24 comments
yuanming-hu (Member) commented Mar 29, 2020

archibate (Collaborator) commented Mar 29, 2020

> Restore vectorization on CPUs

Vectorization? This may also apply to GLSL, since it provides vec3 and vec4 just like the mmintrin intrinsics on x64 CPUs. I'm not sure whether this gives any performance boost over non-vectorized code on GL, though.

> Restore automatic shared memory utilization on GPUs

Is this specific to CUDA, or could it also be applied to the Metal and OpenGL backends?

archibate (Collaborator) commented

Also add real function support #602

yuanming-hu (Member, Author) commented

> Vectorization? This may also apply to GLSL, since it provides vec3 and vec4 just like the mmintrin intrinsics on x64 CPUs. I'm not sure whether this gives any performance boost over non-vectorized code on GL, though.

vec3/vec4 in OpenGL is not really vectorization: the components are still held in a few 32-bit registers rather than in one wide SIMD register.

> Restore automatic shared memory utilization on GPUs
>
> Is this specific to CUDA, or could it also be applied to the Metal and OpenGL backends?

Good point. In the future, we should do this for other GPU backends with shared memory support as well.
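
To make the shared-memory point concrete, here is a minimal sketch (illustration only, not the optimization itself) of the access pattern that automatic shared-memory staging targets: adjacent threads re-read each other's neighbors, so caching a tile of x in on-chip memory (CUDA shared memory, Metal threadgroup memory, or shared variables in OpenGL compute shaders) saves global-memory loads.

import taichi as ti

ti.init(arch=ti.gpu)

n = 1024
x = ti.field(ti.f32, shape=n)
y = ti.field(ti.f32, shape=n)

@ti.kernel
def blur():
    for i in range(1, n - 1):
        # Neighboring threads all load x[i - 1], x[i], x[i + 1] from global
        # memory; staging a tile of x in shared memory avoids the re-reads.
        y[i] = (x[i - 1] + x[i] + x[i + 1]) / 3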

k-ye (Member) commented Jun 6, 2020

> Good point. In the future, we should do this for other GPU backends with shared memory support as well.

Coming in late on this: Metal does support shared memory as well, so if this optimization can be expressed at the CHI level, it should be doable on Metal.

archibate (Collaborator) commented Jul 27, 2020

@yuanming-hu Hi, what's the progress? Could you update the checkboxes in the issue description?

Headcrabed commented

Does "AMDGPU backend" means Taichi would support OpenCL?

Rullec (Contributor) commented Sep 11, 2020

> cpplint and pylint

> Yes, if you're familiar with them or have some free time :>

> Move to multi-pass Python AST transformer

> In fact, the current AST transformer is so intricate that even I don't dare refactor it casually... but feel free to contribute if you're familiar with Python's built-in ast module.

> Hi all,
> So I came back to this roadmap again after my refactoring work finished.
> Clean code is really appealing, so I'm quite interested in the cpplint and pylint items and the "Move to multi-pass Python AST transformer" item. Do you need, or may I offer, some help with them?

> I believe pylint would be a very nice thing to have :-) That will help us significantly improve Python code quality. Please feel free to open an issue to track this. Thank you so much! (I believe @k-ye may have something to say about this; he seems to be using pylint offline already.)

> cpplint is good too, but I did some initial investigation and found no tool that supports C++17.

So sorry for my late reply! To be honest, I didn't receive the notifications, either by email or on the GitHub website, until I checked this issue manually today.

I will begin work on the Python linter soon. Thanks for all your replies!
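
For readers unfamiliar with the idea, here is a minimal sketch of what "multi-pass" means with Python's built-in ast module; this is illustration only, not Taichi's actual transformer. Each pass is a small ast.NodeTransformer, and the passes run one after another over the same tree:

import ast

class FoldAddZero(ast.NodeTransformer):
    # Pass 1: fold `x + 0` into `x`.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if (isinstance(node.op, ast.Add)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 0):
            return node.left
        return node

class RenameVar(ast.NodeTransformer):
    # Pass 2: rename every occurrence of one variable.
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

def run_passes(source, passes):
    tree = ast.parse(source)
    for p in passes:
        tree = ast.fix_missing_locations(p.visit(tree))
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+

print(run_passes("y = x + 0", [FoldAddZero(), RenameVar("x", "x0")]))  # prints: y = x0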

yuanming-hu (Member, Author) commented

Does "AMDGPU backend" means Taichi would support OpenCL?

@Headcrabed OpenCL is a different thing. AMDGPU here refers to AMD ROCm :-)

I will begin my work on the python linter soon, thanks for all your reply!

Oh actually @archibate has already done the first step in #1846 - could you help to review? Thanks! In the future, we may want to gradually improve Python code quality, partly following the messages from pylint.
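
As a rough sketch of what "gradually" could look like (the commands and the python/taichi path are assumptions, not an agreed-upon setup): start with errors only and widen the rule set over time.

pip install pylint
# Errors only at first:
pylint python/taichi --disable=all --enable=E
# Later, drive the check from a project rcfile and enable more categories:
pylint python/taichi --rcfile=.pylintrc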

rexwangcc (Collaborator) commented Sep 14, 2020

When it comes to improving code quality, I personally think black is a better solution than yapf: it enforces a subset of PEP8 styles, provides both lint and format commands, and is more and more widely adopted. But looking at #592, I guess that ship has sailed 🚢 ..

Headcrabed commented Sep 14, 2020

Does "AMDGPU backend" means Taichi would support OpenCL?

@Headcrabed OpenCL is a different thing. AMDGPU here refers to AMD ROCm :-)

Thanks for your reply, by the way have you thought about adding OpenCL support into future roadmap? Since many people still use Windows for creating artworks, I think add support for it is of importance.(ROCm hasn't supported Windows and RDNA GPUs now and AMD didn't say anything about this)

yuanming-hu (Member, Author) commented

@rexwangcc Does black cover the jobs of both yapf and pylint? From black's README, it feels like a code formatter rather than a static analyzer :-) I'm not sure what linting means in the context of black.

> Thanks for your reply. By the way, have you thought about adding OpenCL support to a future roadmap? Since many people still use Windows for creating artwork, I think supporting it is important. (ROCm doesn't support Windows or RDNA GPUs yet, and AMD hasn't said anything about this.)

@Headcrabed Yes, OpenCL is on our roadmap, but we are waiting for a brave and passionate contributor. The OpenCL backend should generate OpenCL source, just like the OpenGL Compute Shader backend (https://github.com/taichi-dev/taichi/tree/master/taichi/backends/opengl). We are also considering Vulkan.
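
As a rough illustration of the "generate OpenCL source at runtime" approach (this is not Taichi code and not how the backend would actually be wired up; it uses the third-party pyopencl package purely to show the flavor shared with the OpenGL backend):

import numpy as np
import pyopencl as cl

# A kernel string of the kind a codegen backend would emit:
src = """
__kernel void saxpy(__global float *y, __global const float *x, const float a) {
    int i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, src).build()  # compile the generated source at runtime

x = np.arange(16, dtype=np.float32)
y = np.ones(16, dtype=np.float32)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

prg.saxpy(queue, x.shape, None, y_buf, x_buf, np.float32(2.0))
cl.enqueue_copy(queue, y, y_buf)
print(y)  # y = 2 * x + 1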

rexwangcc (Collaborator) commented

@yuanming-hu Thinking about Taichi's requirements again and looking at the latest black, I take that back...

I think Taichi's goal, from a code-quality perspective, is to settle on a custom set of PEP8 rules and configure pylint properly so that it works as a static analyzer that boosts developer velocity. black, by contrast, is designed as an opinionated zero-config formatter that forces a pre-defined set of styles on the code base: it only provides black ./ (formatter) and black ./ --check (a check mode, but not a pylint equivalent), which used to work well enough. Since black's code style now appears to be diverging from PEP8, I think we can drop that option. (Please correct me if I'm wrong here.)

yuanming-hu (Member, Author) commented

Update: we will bump to v0.7 this weekend.

archibate (Collaborator) commented

Great! Will we retire functions like ti.classkernel and ti.outer_product in this version?

yuanming-hu (Member, Author) commented

Yes :-)

By the way, it seems to me that ti.var no longer generates a warning. Could you please look into that?

archibate (Collaborator) commented

> Yes :-)
>
> By the way, it seems to me that ti.var no longer generates a warning. Could you please look into that?

Oh, really? I can't reproduce that in the latest release:

Python 3.8.5 (tags/v3.8.5:580fbb0, Jul 20 2020, 15:57:54) [MSC v.1924 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license()" for more information.
>>> import taichi as ti
[Taichi] mode=release
[Taichi] version 0.6.41, llvm 10.0.0, commit 706c5196, win, python 3.8.5
>>> x = ti.var(ti.f32, ())

Warning (from warnings module):
  File "<pyshell#1>", line 1
DeprecationWarning: ti.var is deprecated, please use ti.field instead
ti.var 已被废除,请使用 ti.field 来替换
>>> x
<ti.field>
>>> 
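
For reference, the migration the warning asks for is a one-line change, using ti.field's shape keyword:

x = ti.var(ti.f32, ())           # deprecated spelling
x = ti.field(ti.f32, shape=())   # replacement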

yuanming-hu (Member, Author) commented

Many thanks to everyone who has contributed to v0.7!
We will write an official post to celebrate this new minor version release.

This issue is superseded by #1989 and #1988.

yuanming-hu unpinned this issue on Oct 22, 2020