
[gui] GGUI 4/n: Vulkan GUI Backend #2662

Closed
wants to merge 2 commits

Conversation


@AmesingFlank AmesingFlank commented Aug 11, 2021

Related issue = #2646

This is the 4th in a series of PRs that add a GPU-based GUI to taichi. This PR adds the majority of the Vulkan implementation.

Some notes regarding the organization of the code:

There is a Renderable class, which encapsulates the key resources and operations of something that can be rendered, including VBO, IBO, and a rendering pipeline. It is configured using the RenderableConfig class.

Except for the ImGui widgets, almost every GGUI API corresponds to a subclass of Renderable, e.g. Lines, Mesh, SetImage, etc. These subclasses are responsible for defining their own uniform buffers (and descriptors), as well as any additional rendering resources they need (e.g. SetImage needs a texture).

Each Renderable can be drawn on the Canvas class. The canvas caches the Renderables created in previous frames, and only creates new Renderables when needed.
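For readers new to the code, here is a minimal sketch of how these pieces are intended to fit together. It is illustrative only: apart from Renderable, RenderableConfig, and Canvas themselves, every field and method name below is an assumption rather than the PR's actual code.

#include <memory>
#include <string>
#include <vector>
#include <vulkan/vulkan.h>

// Illustrative sketch only; member and method names are assumptions.
struct RenderableConfig {
  int max_vertices = 0;
  int max_indices = 0;
  std::string vertex_shader_path;
  std::string fragment_shader_path;
};

class Renderable {
 public:
  explicit Renderable(const RenderableConfig &config) : config_(config) {}
  virtual ~Renderable() = default;

  // Each subclass records its own draw commands, using its own uniform
  // buffers / descriptors plus any extra resources it owns
  // (e.g. SetImage would also own a texture).
  virtual void record_commands(VkCommandBuffer command_buffer) = 0;

 protected:
  RenderableConfig config_;
  VkBuffer vertex_buffer_ = VK_NULL_HANDLE;  // VBO
  VkBuffer index_buffer_ = VK_NULL_HANDLE;   // IBO
  VkPipeline pipeline_ = VK_NULL_HANDLE;     // rendering pipeline
};

class Canvas {
 public:
  // Renderables created in previous frames are cached and reused; a new
  // Renderable is constructed only when no suitable cached one exists.
  Renderable *get_or_create(size_t slot, const RenderableConfig &config);

 private:
  std::vector<std::unique_ptr<Renderable>> cached_renderables_;
};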

Regarding synchronization:

In each frame, CUDA waits for the prev_draw_finished semaphore. Then, CUDA kernels are launched that update the VBO/IBO/textures. CUDA then signals this_draw_data_ready, which Vulkan waits on. After Vulkan finishes its graphics operations, it signals prev_draw_finished. All wait/signal operations are async with respect to the CPU (see the sketch below).

Sadly, though, we're still syncing GPU and CPU during the UBO update...
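A minimal sketch of that per-frame chain, assuming the two VkSemaphores have already been exported from Vulkan and imported into CUDA as external semaphores; the function signature and variable names are illustrative, not the PR's actual code.

#include <cuda_runtime.h>
#include <vulkan/vulkan.h>

// Sketch of one frame: CUDA waits on prev_draw_finished, updates the shared
// buffers, signals this_draw_data_ready; Vulkan waits on that signal and
// signals prev_draw_finished when rendering is done. Nothing blocks the CPU.
void draw_frame(cudaStream_t stream,
                cudaExternalSemaphore_t prev_draw_finished,
                cudaExternalSemaphore_t this_draw_data_ready,
                VkQueue graphics_queue,
                VkCommandBuffer command_buffer,
                VkSemaphore vk_this_draw_data_ready,
                VkSemaphore vk_prev_draw_finished) {
  // 1. CUDA waits until the previous frame's rendering has finished.
  cudaExternalSemaphoreWaitParams wait_params{};
  cudaWaitExternalSemaphoresAsync(&prev_draw_finished, &wait_params, 1, stream);

  // 2. CUDA kernels update the VBO/IBO/textures on `stream`
  //    (kernel launches omitted in this sketch).

  // 3. CUDA signals that this frame's draw data is ready.
  cudaExternalSemaphoreSignalParams signal_params{};
  cudaSignalExternalSemaphoresAsync(&this_draw_data_ready, &signal_params, 1,
                                    stream);

  // 4. The Vulkan graphics submit waits on this_draw_data_ready and signals
  //    prev_draw_finished for the next frame's CUDA work.
  VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_VERTEX_INPUT_BIT;
  VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
  submit.waitSemaphoreCount = 1;
  submit.pWaitSemaphores = &vk_this_draw_data_ready;
  submit.pWaitDstStageMask = &wait_stage;
  submit.commandBufferCount = 1;
  submit.pCommandBuffers = &command_buffer;
  submit.signalSemaphoreCount = 1;
  submit.pSignalSemaphores = &vk_prev_draw_finished;
  vkQueueSubmit(graphics_queue, 1, &submit, VK_NULL_HANDLE);
}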

@AmesingFlank AmesingFlank requested review from k-ye and bobcao3 August 11, 2021 10:47
command_buffer = cached_command_buffers_[image_index];
} else {
command_buffer = create_new_command_buffer(app_context_->command_pool(),
app_context_->device());
Collaborator

We should build a reuse-based transient allocator for these. I have an example here: https://github.com/bobcao3/BerkeleyGfx/blob/main/sample/2_terrain/terrain.cpp

Collaborator Author

I'll take a look.

Collaborator Author

Is this because we wish to avoid constantly allocating and releasing memory for command buffers? If we're to create an allocator for command buffers, should we use the same allocator for taichi's vulkan backend as well?
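For reference, one possible shape of such a scheme (a sketch only, not the BerkeleyGfx allocator linked above): reset a transient command pool once per frame so previously allocated command buffers are recycled instead of freed and re-created.

#include <vector>
#include <vulkan/vulkan.h>

// Illustrative sketch of a reuse-based transient command buffer pool.
class TransientCommandBufferPool {
 public:
  TransientCommandBufferPool(VkDevice device, uint32_t queue_family)
      : device_(device) {
    VkCommandPoolCreateInfo info{VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO};
    info.flags = VK_COMMAND_POOL_CREATE_TRANSIENT_BIT;
    info.queueFamilyIndex = queue_family;
    vkCreateCommandPool(device_, &info, nullptr, &pool_);
  }

  // Call once per frame (after the GPU is done with the previous buffers):
  // all command buffers allocated from the pool become reusable.
  void reset() {
    vkResetCommandPool(device_, pool_, 0);
    next_ = 0;
  }

  // Hands back a recycled command buffer, allocating only on first use.
  VkCommandBuffer request() {
    if (next_ < buffers_.size()) {
      return buffers_[next_++];
    }
    VkCommandBufferAllocateInfo alloc{
        VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO};
    alloc.commandPool = pool_;
    alloc.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    alloc.commandBufferCount = 1;
    VkCommandBuffer cmd;
    vkAllocateCommandBuffers(device_, &alloc, &cmd);
    buffers_.push_back(cmd);
    ++next_;
    return cmd;
  }

 private:
  VkDevice device_;
  VkCommandPool pool_ = VK_NULL_HANDLE;
  std::vector<VkCommandBuffer> buffers_;
  size_t next_ = 0;
};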

image_mem);

vkBindImageMemory(device, image, image_mem, 0);
}
Collaborator

nit: use the Vulkan memory allocator from the vulkan backend, and move to our Device API in the future. (This & buffer)

Collaborator Author

I could use a bit of help with this. For CUDA-Vulkan interop, we need the corresponding VkDeviceMemory object and the offset of each buffer and image within it. The current memory allocator does not appear to expose these. Could you help add them to the allocator? I'm not very familiar with VMA.
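For context, a rough sketch of why the VkDeviceMemory handle and the offset are needed on the CUDA side. This is illustrative only; memory_fd is assumed to come from vkGetMemoryFdKHR on the VkDeviceMemory backing the buffer.

#include <cstddef>
#include <cuda_runtime.h>

// Sketch: import a Vulkan allocation into CUDA and map one buffer inside it.
void *map_vulkan_buffer_into_cuda(int memory_fd,
                                  size_t allocation_size,
                                  size_t buffer_offset,
                                  size_t buffer_size) {
  // The exported handle describes the whole VkDeviceMemory allocation.
  cudaExternalMemoryHandleDesc handle_desc{};
  handle_desc.type = cudaExternalMemoryHandleTypeOpaqueFd;
  handle_desc.handle.fd = memory_fd;
  handle_desc.size = allocation_size;

  cudaExternalMemory_t external_memory{};
  cudaImportExternalMemory(&external_memory, &handle_desc);

  // This is where the per-buffer offset within the allocation is required.
  cudaExternalMemoryBufferDesc buffer_desc{};
  buffer_desc.offset = buffer_offset;
  buffer_desc.size = buffer_size;

  void *device_ptr = nullptr;
  cudaExternalMemoryGetMappedBuffer(&device_ptr, external_memory, &buffer_desc);
  return device_ptr;  // usable as a plain CUDA device pointer
}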


bobcao3 commented Aug 12, 2021

Shaders are not a part of this PR right?


bobcao3 commented Aug 12, 2021

How are the mesh data passed in btw? Are they directly accessing the taichi field in the root buffer through some device pointers or are they copied out? If copied, do we have caching or is the copying always happening every frame? What would be the performance implications? If not, what would the user side API look like?

And then finally we should probably also look into GPU driven / indirect rendering, including primitive culling. (Helpful for millions of particles)


bobcao3 commented Aug 12, 2021 via email

@AmesingFlank
Collaborator Author

ImGui stores the pointer and I think it will need to persist for it to write back the data. Not sure, it might work; I don't exactly know whether ImGui caches the value internally or relies on the pointer.

On Wed, Aug 11, 2021, 11:14 PM Dunfan Lu wrote, in taichi/ui/backends/vulkan/gui.cpp:

void Gui::text(std::string text) {
  ImGui::Text(text.c_str());
}
bool Gui::checkbox(std::string name, bool old_value) {
  ImGui::Checkbox(name.c_str(), &old_value);
  return old_value;
}
float Gui::slider_float(std::string name,
                        float old_value,
                        float minimum,
                        float maximum) {
  ImGui::SliderFloat(name.c_str(), &old_value, minimum, maximum);
  return old_value;
}
glm::vec3 Gui::color_edit_3(std::string name, glm::vec3 old_value) {
  ImGui::ColorEdit3(name.c_str(), (float *)&old_value);

Could you please elaborate? I'm under the impression that it suffices to ensure that &old_value is valid throughout the call to ImGui::ColorEdit3.

https://github.com/ocornut/imgui/blob/c7529c8ea8ef36e344d00cb38e1493b465ce6090/imgui_widgets.cpp#L4957-L4961
Any new values are written back immediately. I think the way we do it currently is fine.

@AmesingFlank
Collaborator Author

Shaders are not a part of this PR right?

Yep.

@AmesingFlank
Collaborator Author

How are the mesh data passed in btw? Are they directly accessing the taichi field in the root buffer through some device pointers or are they copied out? If copied, do we have caching or is the copying always happening every frame? What would be the performance implications? If not, what would the user side API look like?

And then finally we should probably also look into GPU driven / indirect rendering, including primitive culling. (Helpful for millions of particles)

The data is not copied out. The CUDA kernels directly access the buffers used by taichi.
For the user-side API, you may look at this mpm example.
To understand how the data from taichi is made available to GGUI, look here.

Re gpu-driven/indirect rendering, I know very little about them. Maybe you'd like to make these improvements in the future?


bobcao3 commented Aug 12, 2021 via email

@AmesingFlank
Collaborator Author

What's CUDA's role in GGUI?


If taichi is running on the CUDA backend, then GGUI will launch its own CUDA kernels to copy the data into the Vulkan VBOs/IBOs/textures.
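Roughly, the copy step looks like the following sketch (simplified, with illustrative names; not the PR's actual kernel). It would be launched on the stream between the prev_draw_finished wait and the this_draw_data_ready signal described in the PR summary.

#include <cuda_runtime.h>

// Illustrative vertex layout; the real VBO layout may differ.
struct Vertex {
  float pos[3];
  float color[3];
};

// Copies per-vertex data from a taichi field (via its raw device pointer)
// into the Vulkan vertex buffer that was imported into CUDA.
__global__ void update_vbo(const float *taichi_positions,  // x, y, z per vertex
                           const float *taichi_colors,     // r, g, b per vertex
                           Vertex *vbo,
                           int num_vertices) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= num_vertices)
    return;
  for (int c = 0; c < 3; ++c) {
    vbo[i].pos[c] = taichi_positions[3 * i + c];
    vbo[i].color[c] = taichi_colors[3 * i + c];
  }
}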


bobcao3 commented Aug 12, 2021 via email

@AmesingFlank
Collaborator Author

Can these be taichi kernels? I guess we might be able to do that in the future


I don't think that's possible right now. But maybe in the future? The taichi kernel would need to access both a CUDA buffer and a Vulkan buffer.

@AmesingFlank
Collaborator Author

Abandoning this PR in favor of this one
