-
Thank you for writing this! I've wanted to make a blog post covering all of this ever since I started contributing to bevy's renderer. I'd like to add a few more details for anyone finding this. Nodes can technically run arbitrary code: multiple render passes, or just a compute pass without any rendering at all. The only restriction they have is read-only access to the ECS. We've talked about making Nodes just bevy systems to reduce the number of different APIs required, but there are some limitations with this approach that make it not ideal. Here's what cart had to say on this:
In other words, part of the design is intentional but not finished, so we have to deal with a lot of complexity while not currently benefitting from the potential gains.
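To make the "arbitrary code, read-only ECS" point concrete, here is a minimal sketch of a custom node, assuming the `Node` trait shape from the Bevy 0.11/0.12 era (the node itself and everything inside it is hypothetical):

```rust
use bevy::prelude::World;
use bevy::render::{
    render_graph::{Node, NodeRunError, RenderGraphContext},
    renderer::RenderContext,
};

struct MyComputeNode;

impl Node for MyComputeNode {
    fn run(
        &self,
        _graph: &mut RenderGraphContext,
        render_context: &mut RenderContext,
        world: &World, // read-only ECS access is the only restriction
    ) -> Result<(), NodeRunError> {
        // A node can read any render-world resource or component here,
        // then encode zero or more passes on the command encoder, e.g.:
        // let mut pass = render_context
        //     .command_encoder()
        //     .begin_compute_pass(&wgpu::ComputePassDescriptor::default());
        let _ = (render_context, world);
        Ok(())
    }
}
```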
-
This is off the top of my head so there may be mistakes. In my mind there are separate flows for assets and entities.

Assets like Mesh and materials are extracted from the main world when they are added/modified/removed. In the PrepareAssets set, they are transformed into their prepared form, e.g. GpuMesh for a Mesh asset, or a PreparedMaterial for a Material. Material preparation involves bind group creation, which is where as_bind_group() is called.

Mesh entities are extracted in extract_meshes. This involves extracting mesh transforms, flags, and the mesh handle. Separately, the material handle is extracted.

Then, in the Queue set, if the assets have been prepared, they are looked up and various properties of them are used to create a specialised render pipeline. Specialisation is done based on things like the required mesh vertex attributes and their layout, flags in the material key that might impact the layout of the bind group (e.g. layout of material bindings), view-level things like MSAA, and so on. The entity is queued to the appropriate render phase (if it's a 2D mesh, Transparent2d; if it's a 3D mesh, then it could be Opaque3d, AlphaMask3d, or Transparent3d, depending on the AlphaMode of the material).

In the Sort set, the render phases are sorted into draw order by the phase item sort keys, for example back to front for transparent phases so that alpha blending works correctly.

In Prepare we now have two subsets: PrepareResources and PrepareBindGroups. In PrepareResources, the new batch_and_prepare_render_phase system is run. Its job is to gather information about the phase items (entities) in draw order, prepare the per-entity data (the mesh transforms and flags), and if possible, merge the phase items to result in instanced draws of multiple mesh entities with one draw command when the draw commands are encoded later. At the end of the system, the command to write the buffer of mesh transforms to VRAM is queued. After the data has been prepared into buffers, bind groups are (re)created.

In the Render set, the render graphs are run for their corresponding views. So the 3D graph is run for 3D views such as shadow-mapping views or 3D cameras. The graphs are directed acyclic graphs of render graph nodes. They define ordered execution of render passes. For 3D this is, broadly speaking, prepasses (to render depth, normals, and motion vectors used in later passes), the main lighting passes (opaque, alpha mask, transparent), and then post-processing passes (bloom, TAA, …).

If we focus on meshes, and take the simple case of not having any prepasses, the main 3D pass nodes are run. They first process the Opaque3d render phase, creating a TrackedRenderPass and iterating through the items in the phase. For each render phase item, its DrawFunction (an ordered tuple of RenderCommands) is executed. Each RenderCommand can query for resources or components from the view and/or mesh entities, and uses them to encode the draw command by calling functions on the TrackedRenderPass: set the pipeline, bind groups, and index/vertex buffers, and finally call draw, saying which vertices from the bound index/vertex buffers to draw, and for what range of instances.
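For orientation, here is a minimal sketch of how custom systems would slot into the sets described above, assuming Bevy 0.12's `RenderSet` names (the systems themselves are hypothetical placeholders):

```rust
use bevy::prelude::*;
use bevy::render::{Render, RenderApp, RenderSet};

fn prepare_my_asset() { /* turn extracted assets into their GPU form */ }
fn queue_my_entities() { /* specialize pipelines, push phase items */ }
fn prepare_my_buffers() { /* write per-entity data to GPU buffers */ }
fn prepare_my_bind_groups() { /* (re)create bind groups */ }

fn build(app: &mut App) {
    let render_app = app.sub_app_mut(RenderApp);
    render_app.add_systems(
        Render,
        (
            prepare_my_asset.in_set(RenderSet::PrepareAssets),
            queue_my_entities.in_set(RenderSet::Queue),
            // phases are sorted in RenderSet::PhaseSort,
            prepare_my_buffers.in_set(RenderSet::PrepareResources),
            prepare_my_bind_groups.in_set(RenderSet::PrepareBindGroups),
        ),
    );
}
```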
-
I've been looking into ray tracing and need to implement my own; wgpu completely lacks support for it, but I've found you can do it within a compute shader. What I'm having trouble understanding is the purpose of all this render graph stuff in regards to the pipeline. When looking at the pipeline for rasterization and how it is explained elsewhere, I don't really understand what's going on. Is the graph just a massive optimization method or something? It just feels so disconnected from the pipeline structure described by lower-level rendering APIs that I'm left confused about what each thing actually does, so I'm struggling to know where I can even start to replace the current rendering system with the compute shader.
-
Wait, the vertex and index buffers are set for each mesh independently? I thought bevy did some advanced stuff to combine all the vertices into one buffer. But I guess that's not necessary? How efficient is this?
-
This is a guide exploring step-by-step how a `Mesh` goes from a `Handle<Mesh>` to pixels on screen. It might be useful to people who want to implement their own rendering on top of bevy's render graph. It started as an answer to #9036, but it answers more general questions.

1. `Asset`s are passed to the render world through their `RenderAsset` ~~dark world~~ render world counterpart. What is accessed by the render world is not a `Mesh`, but a `GpuMesh`; the transformation is done at the extraction stage.
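As a sketch of what that counterpart relationship looks like, here is the `RenderAsset` trait abridged from memory of the Bevy 0.11/0.12 era (treat the exact signatures as assumptions; the real definition lives in `bevy_render/src/render_asset.rs`):

```rust
use bevy::asset::Asset;
use bevy::ecs::system::{SystemParam, SystemParamItem};
use bevy::render::render_asset::PrepareAssetError;

// Abridged: RenderAsset ties a main-world asset to the GPU-side
// representation the render world actually uses.
pub trait RenderAsset: Asset {
    type ExtractedAsset: Send + Sync + 'static;
    type PreparedAsset: Send + Sync + 'static;
    type Param: SystemParam;

    // Runs during extraction: clone out what the render world needs.
    fn extract_asset(&self) -> Self::ExtractedAsset;

    // Runs in the render world: build the GPU-side representation.
    fn prepare_asset(
        extracted: Self::ExtractedAsset,
        param: &mut SystemParamItem<Self::Param>,
    ) -> Result<Self::PreparedAsset, PrepareAssetError<Self::ExtractedAsset>>;
}

// For meshes, roughly: impl RenderAsset for Mesh with
// type PreparedAsset = GpuMesh (vertex/index buffers + layout info).
```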
2. `pass.draw_indexed(…)` is where the rendering takes place (`bevy/crates/bevy_pbr/src/render/mesh.rs`, lines 1251 to 1269 in `9478432`).
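That permalink points at the `DrawMesh` render command. Abridged from memory (the `Param`/query types and the batching details are best-effort assumptions, and imports are elided since this is an excerpt of bevy internals), it looks roughly like:

```rust
// Abridged sketch of DrawMesh: fetch the GpuMesh prepared for this
// entity's mesh handle, bind its buffers, and encode the draw call.
pub struct DrawMesh;

impl<P: PhaseItem> RenderCommand<P> for DrawMesh {
    type Param = SRes<RenderAssets<Mesh>>;
    type ViewWorldQuery = ();
    type ItemWorldQuery = Read<Handle<Mesh>>;

    fn render<'w>(
        item: &P,
        _view: (),
        mesh_handle: &'w Handle<Mesh>,
        meshes: SystemParamItem<'w, '_, Self::Param>,
        pass: &mut TrackedRenderPass<'w>,
    ) -> RenderCommandResult {
        let Some(gpu_mesh) = meshes.into_inner().get(mesh_handle) else {
            return RenderCommandResult::Failure;
        };
        pass.set_vertex_buffer(0, gpu_mesh.vertex_buffer.slice(..));
        match &gpu_mesh.buffer_info {
            GpuBufferInfo::Indexed { buffer, index_format, count } => {
                pass.set_index_buffer(buffer.slice(..), 0, *index_format);
                // batch_range selects the instances merged into this draw
                pass.draw_indexed(0..*count, 0, item.batch_range().clone());
            }
            GpuBufferInfo::NonIndexed => {
                pass.draw(0..gpu_mesh.vertex_count, item.batch_range().clone());
            }
        }
        RenderCommandResult::Success
    }
}
```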
3. `DrawMesh`: We have the code, but it needs to be run. Where is it run? Well, first we make `DrawMesh` part of a larger render command (`bevy/crates/bevy_pbr/src/material.rs`, lines 342 to 348 in `9478432`).
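That larger command is a tuple of `RenderCommand`s executed in order; from memory (the exact bind group indices are assumptions), it reads:

```rust
// A composite render command: each element runs in order when an item
// is drawn, ending with the actual draw call from point (2).
pub type DrawMaterial<M> = (
    SetItemPipeline,            // bind the specialized render pipeline
    SetMeshViewBindGroup<0>,    // per-view data (camera, lights, ...)
    SetMaterialBindGroup<M, 1>, // the material's own bind group
    SetMeshBindGroup<2>,        // per-mesh data (transforms, flags, ...)
    DrawMesh,                   // point (2)
);
```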
4. `RenderGraph`: A graph is made of `Node`s; nodes set up wgpu "render passes" and render a `RenderPhase<I>` for a specific `I` that implements `PhaseItem`. What's the relationship with `DrawMaterial`? Well, `DrawMaterial` is a `RenderCommand`, which is associated with a specific `PhaseItem`. Here we associate `DrawMaterial` with three different `PhaseItem`s: `Transparent3d`, `Opaque3d`, and `AlphaMask3d`. More on this later (`bevy/crates/bevy_pbr/src/material.rs`, lines 194 to 199 in `9478432`).
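Those lines sit in `MaterialPlugin`; a sketch of the registration, assuming the Bevy 0.11/0.12-era API (the `build` wrapper is hypothetical):

```rust
use bevy::core_pipeline::core_3d::{AlphaMask3d, Opaque3d, Transparent3d};
use bevy::pbr::{DrawMaterial, Material};
use bevy::prelude::*;
use bevy::render::{render_phase::AddRenderCommand, RenderApp};

// Register the same command for each 3D phase the material can land in.
fn build<M: Material>(app: &mut App) {
    app.sub_app_mut(RenderApp)
        .add_render_command::<Transparent3d, DrawMaterial<M>>()
        .add_render_command::<Opaque3d, DrawMaterial<M>>()
        .add_render_command::<AlphaMask3d, DrawMaterial<M>>();
}
```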
`add_render_command` wraps the `RenderCommand` in a `RenderCommandState` and adds it as a `Box<dyn Draw<I>>` to the `DrawFunctions<I>` resource (`bevy/crates/bevy_render/src/render_phase/draw.rs`, lines 293 to 305 in `0181d40`).
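From memory, that implementation amounts to the following (an abridged excerpt of bevy internals, imports elided; the details are assumptions):

```rust
// Abridged: wrap the command tuple C in a RenderCommandState (which
// owns the SystemState used to fetch C::Param while drawing), then
// register it in DrawFunctions<P>, keyed by the command's type.
impl AddRenderCommand for App {
    fn add_render_command<P: PhaseItem, C: RenderCommand<P> + Send + Sync + 'static>(
        &mut self,
    ) -> &mut Self
    where
        C::Param: ReadOnlySystemParam,
    {
        let draw_function = RenderCommandState::<P, C>::new(&mut self.world);
        let draw_functions = self
            .world
            .get_resource::<DrawFunctions<P>>()
            .expect("DrawFunctions<P> must exist: add the phase before its commands");
        draw_functions.write().add_with::<C, _>(draw_function);
        self
    }
}
```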
5. `RenderGraph`: bevy defines a set of default nodes in `bevy_core_pipeline`, one of which is `MainOpaquePass3dNode` (`bevy/crates/bevy_core_pipeline/src/core_3d/main_opaque_pass_3d_node.rs`, line 25 in `9478432`).
6. `ViewNode`: create a render pass. There is a lot of shared boilerplate between different `Node`s, so bevy defines a wrapper type `ViewNodeRunner` that does the shared boilerplate; the unique work is implemented in the `ViewNode` trait implementation. Generally, nodes create render passes and execute them. Render passes are a WebGPU concept; I'm not quite familiar with them (`bevy/crates/bevy_core_pipeline/src/core_3d/main_opaque_pass_3d_node.rs`, lines 68 to 102 in `9478432`).
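A minimal sketch mirroring the shape of that implementation, assuming the 0.11/0.12-era `ViewNode` API (the node itself is hypothetical, and the attachments are elided):

```rust
use bevy::core_pipeline::core_3d::Opaque3d;
use bevy::ecs::query::QueryItem;
use bevy::prelude::World;
use bevy::render::{
    render_graph::{NodeRunError, RenderGraphContext, ViewNode},
    render_phase::RenderPhase,
    render_resource::RenderPassDescriptor,
    renderer::RenderContext,
};

#[derive(Default)]
struct MyOpaqueNode;

impl ViewNode for MyOpaqueNode {
    // ViewNodeRunner fetches this query from the view entity for us.
    type ViewQuery = &'static RenderPhase<Opaque3d>;

    fn run(
        &self,
        graph: &mut RenderGraphContext,
        render_context: &mut RenderContext,
        opaque_phase: QueryItem<Self::ViewQuery>,
        world: &World,
    ) -> Result<(), NodeRunError> {
        // 1. Begin a wgpu render pass, wrapped in a TrackedRenderPass.
        let mut render_pass = render_context.begin_tracked_render_pass(RenderPassDescriptor {
            label: Some("my_opaque_pass"),
            color_attachments: &[/* view target attachment elided */],
            depth_stencil_attachment: None,
        });
        // 2. Let the phase encode its draw calls into it; see point (7).
        opaque_phase.render(&mut render_pass, world, graph.view_entity());
        Ok(())
    }
}
```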
7. `ViewNode`: call `RenderPhase::render`. `RenderPhase` is a component; they are added to the render-world equivalent of cameras (views), and each view may have several `RenderPhase`s of different types. A `RenderPhase` is a collection of `PhaseItem`s; according to the docs, each `PhaseItem` is a "renderable" entity that is visible from the camera. One needs to add them manually in `queue` systems, as sketched below (`bevy/crates/bevy_core_pipeline/src/core_3d/main_opaque_pass_3d_node.rs`, lines 110 to 111 in `9478432`).
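A hypothetical queue system might look like this (the system and its contents are placeholders, not bevy's actual `queue_material_meshes`; real systems also specialize the pipeline and compute the sort key here):

```rust
use bevy::core_pipeline::core_3d::Opaque3d;
use bevy::pbr::DrawMaterial;
use bevy::prelude::*;
use bevy::render::render_phase::{DrawFunctions, RenderPhase};

fn queue_my_meshes(
    draw_functions: Res<DrawFunctions<Opaque3d>>,
    mut views: Query<&mut RenderPhase<Opaque3d>>,
) {
    // Look up the DrawFunctionId registered by add_render_command.
    let draw_function = draw_functions.read().id::<DrawMaterial<StandardMaterial>>();
    for mut opaque_phase in &mut views {
        // For each visible entity with a prepared pipeline, push a phase
        // item (field names recalled from the 0.12 layout):
        // opaque_phase.add(Opaque3d { entity, pipeline, draw_function, distance, .. });
        let _ = (&mut opaque_phase, draw_function);
    }
}
```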
8. `RenderPhase::render`: call the `draw` method on all the render phase items. This is complicated, but we are almost at the end. Remember `add_render_command` in point (4)? This is where `DrawFunctions<I>` becomes important! For each item in the phase, we look up its draw function in `DrawFunctions<I>` and execute `Draw<I>::draw` (`bevy/crates/bevy_render/src/render_phase/mod.rs`, lines 77 to 91 in `9478432`).
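Ignoring the batching bookkeeping, my recollection of that loop (an approximation of bevy internals, imports elided) is:

```rust
// Sketch of RenderPhase::render: look up each item's draw function
// and hand it the pass, the world, and the item.
impl<I: PhaseItem> RenderPhase<I> {
    pub fn render<'w>(
        &self,
        render_pass: &mut TrackedRenderPass<'w>,
        world: &'w World,
        view: Entity,
    ) {
        let draw_functions = world.resource::<DrawFunctions<I>>();
        let mut draw_functions = draw_functions.write();
        for item in &self.items {
            let draw_function = draw_functions.get_mut(item.draw_function()).unwrap();
            draw_function.draw(world, render_pass, view, item);
        }
    }
}
```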
Now we can close the loop. Remember the code in point (2)? We asked in point (3) where it is run. It's here! The implementation of `Draw<I>::draw` for `RenderCommandState` is nothing more than a single call to `RenderCommand::render`, the code in point (2)! (`bevy/crates/bevy_render/src/render_phase/draw.rs`, lines 260 to 272 in `9478432`)
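Abridged from memory (lifetimes and the exact query fetching are assumptions; imports elided), that implementation is essentially:

```rust
// RenderCommandState adapts a RenderCommand tuple C into the
// object-safe Draw<P> trait object stored in DrawFunctions<P>.
impl<P: PhaseItem, C: RenderCommand<P> + Send + Sync + 'static> Draw<P>
    for RenderCommandState<P, C>
where
    C::Param: ReadOnlySystemParam,
{
    fn draw<'w>(
        &mut self,
        world: &'w World,
        pass: &mut TrackedRenderPass<'w>,
        view: Entity,
        item: &P,
    ) {
        // Fetch the command's SystemParam data plus the view/item query
        // results, then forward everything to the tuple's render(),
        // i.e. the code from point (2).
        let param = self.state.get_manual(world);
        let view = self.view.get_manual(world, view).unwrap();
        let entity = self.entity.get_manual(world, item.entity()).unwrap();
        C::render(item, view, entity, param, pass);
    }
}
```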