[gui] GGUI 4/n: Vulkan GUI Backend #2662
Conversation
```cpp
  command_buffer = cached_command_buffers_[image_index];
} else {
  command_buffer = create_new_command_buffer(app_context_->command_pool(),
                                             app_context_->device());
```
We should build a reuse-based transient allocator for these. An example is here: https://github.com/bobcao3/BerkeleyGfx/blob/main/sample/2_terrain/terrain.cpp
I'll take a look.
Is this because we wish to avoid constantly allocating and releasing memory for command buffers? If we're to create an allocator for command buffers, should we use the same allocator for taichi's Vulkan backend as well?
```cpp
    image_mem);
vkBindImageMemory(device, image, image_mem, 0);
}
```
nit: use the Vulkan Memory Allocator from the Vulkan backend, and move to our Device API in the future. (This & buffer)
I could use a bit of help regarding this. For CUDA-Vulkan interop, we need the corresponding VkDeviceMemory object and the offset for each buffer and image. The current memory allocator does not appear to expose these. Could you help add these to the allocator? I'm not very familiar with VMA.
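For reference, VMA does expose what the interop path needs: `vmaGetAllocationInfo` fills a `VmaAllocationInfo` whose `deviceMemory` and `offset` fields identify the backing `VkDeviceMemory` block and the allocation's offset within it. A sketch (not compilable on its own; it assumes an existing `VmaAllocator` and `VmaAllocation`):

```cpp
// Sketch: recovering the VkDeviceMemory handle and offset that
// CUDA-Vulkan interop needs from a VMA allocation.
VmaAllocationInfo alloc_info{};
vmaGetAllocationInfo(allocator, allocation, &alloc_info);

VkDeviceMemory memory = alloc_info.deviceMemory;  // backing memory block
VkDeviceSize offset = alloc_info.offset;          // allocation's offset in it

// Note: memory exported to CUDA should come from a dedicated/exportable
// pool, since the whole VkDeviceMemory block is what gets imported.
```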
Shaders are not a part of this PR, right?
How is the mesh data passed in, btw? Are they directly accessing the taichi field in the root buffer through some device pointers, or are they copied out? If copied, do we have caching, or does the copying happen every frame? What would be the performance implications? If not copied, what would the user-side API look like? And finally, we should probably also look into GPU-driven / indirect rendering, including primitive culling. (Helpful for millions of particles.)
ImGui stores the pointer, and I think it will need to persist for ImGui to write back the data. I'm not sure; it might work. I don't know exactly whether ImGui caches the value internally or relies on the pointer.
On Wed, Aug 11, 2021, 11:14 PM, Dunfan Lu commented in taichi/ui/backends/vulkan/gui.cpp:
```cpp
void Gui::text(std::string text) {
  ImGui::Text(text.c_str());
}
bool Gui::checkbox(std::string name, bool old_value) {
  ImGui::Checkbox(name.c_str(), &old_value);
  return old_value;
}
float Gui::slider_float(std::string name,
                        float old_value,
                        float minimum,
                        float maximum) {
  ImGui::SliderFloat(name.c_str(), &old_value, minimum, maximum);
  return old_value;
}
glm::vec3 Gui::color_edit_3(std::string name, glm::vec3 old_value) {
  ImGui::ColorEdit3(name.c_str(), (float *)&old_value);
```
Could you please elaborate? I'm under the impression that it suffices to ensure that &old_value is valid throughout the call to ImGui::ColorEdit3.
https://github.com/ocornut/imgui/blob/c7529c8ea8ef36e344d00cb38e1493b465ce6090/imgui_widgets.cpp#L4957-L4961
Yep.
The data are not copied out. The CUDA kernels will directly access the buffers used by taichi. Re gpu-driven/indirect rendering, I know very little about them. Maybe you'd like to make these improvements in the future?
What's CUDA's role in GGUI?
On Wed, Aug 11, 2021, 11:41 PM, Dunfan Lu wrote:

> The data are not copied out. The CUDA kernels will directly access the buffers used by taichi.
> For the user-side API, you may look at this mpm example: https://github.com/AmesingFlank/taichi/blob/31e88a5c5a6108624d616c39cbca80cec253ca74/examples/ggui_examples/mpm3d_real.py#L144-L146
> To understand how the data from taichi will be made available to ggui, look here: https://github.com/AmesingFlank/taichi/blob/31e88a5c5a6108624d616c39cbca80cec253ca74/python/taichi/ui/utils.py#L34
> Re gpu-driven/indirect rendering, I know very little about them. Maybe you'd like to make these improvements in the future?
If taichi is running on a CUDA backend, then GGUI will launch its own CUDA kernels to copy these data into Vulkan VBOs/IBOs/textures.
Can these be taichi kernels? I guess we might be able to do that in the future.
I don't think that's possible right now. But maybe in the future? The taichi kernel would need to access both a CUDA buffer and a Vulkan buffer.
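The CUDA side of the copy path discussed above typically goes through CUDA's external-memory API. A sketch under the usual assumptions (the Vulkan allocation was created with `VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT` and exported via `vkGetMemoryFdKHR`; error handling omitted; needs a CUDA-capable device to run):

```cpp
#include <cuda_runtime.h>

// Sketch: map an exported Vulkan allocation into a CUDA device pointer so
// CUDA kernels can write the VBO directly. `fd` comes from vkGetMemoryFdKHR
// on the VkDeviceMemory backing the buffer; `size` and `offset` describe
// the buffer within that memory block.
void *import_vulkan_buffer(int fd, size_t size, size_t offset) {
  cudaExternalMemoryHandleDesc handle_desc{};
  handle_desc.type = cudaExternalMemoryHandleTypeOpaqueFd;
  handle_desc.handle.fd = fd;
  handle_desc.size = size + offset;  // must cover the mapped range

  cudaExternalMemory_t ext_mem{};
  cudaImportExternalMemory(&ext_mem, &handle_desc);

  cudaExternalMemoryBufferDesc buffer_desc{};
  buffer_desc.offset = offset;
  buffer_desc.size = size;

  void *dev_ptr = nullptr;
  cudaExternalMemoryGetMappedBuffer(&dev_ptr, ext_mem, &buffer_desc);
  return dev_ptr;  // usable from CUDA kernels; ordering via semaphores
}
```

This is exactly why the allocator needs to expose the `VkDeviceMemory` handle and offset: the import works at the granularity of the memory block, not the `VkBuffer`.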
Closing this PR in favor of this one.
Related issue: #2646
This is the fourth in a series of PRs that adds a GPU-based GUI to taichi. This PR adds the majority of the Vulkan implementation.

Some notes regarding the organization of the code:

There is a `Renderable` class, which encapsulates the key resources and operations of something that can be rendered, including the VBO, IBO, and a rendering pipeline. It is configured using the `RenderableConfig` class.

Except for the ImGui widgets, almost every GGUI API corresponds to a subclass of `Renderable`, for example `Lines`, `Mesh`, and `SetImage`. These subclasses are responsible for defining their own uniform buffers (and descriptors), as well as any additional rendering resources needed (e.g. `SetImage` needs a texture).

Each `Renderable` can be drawn on the `Canvas` class. The canvas caches `Renderable`s created in previous frames, and only creates new `Renderable`s when needed.

Regarding synchronization: in each frame, CUDA waits on the `prev_draw_finished` semaphore. Then, CUDA kernels are launched that update the VBO/IBO/textures. Then, CUDA signals `this_draw_data_ready`, which Vulkan waits on. After Vulkan finishes its graphics operations, it signals `prev_draw_finished`. All wait/signal operations are async with respect to the CPU. Sadly, though, we're still syncing GPU-CPU during the UBO update...