Question: How are the shaders included in the scenes? #1
Comments

custom-screen-buffers/monolithic_test_scene.tscn (line 5 at 82775b6)

I see in the source code that the monolithic scene references its shaders, but I can't seem to figure out where in the editor you did that.
Found where the shaders are specified. They seem to be applied to each object. This seems resource intensive to me.
Yes, it looks like this in the editor: [screenshot]

When you say "resource intensive", what are you referring to exactly?

If you mean it's not optimized for rendering, that definitely might be true. I don't know all the details of how Godot's rendering pipeline is (or is not) optimized, so I don't know the best way to create these for that purpose. If you have any suggestions, I'd be interested in hearing them!

If you mean that you end up with a lot of different materials, that's true. But you could alleviate that by creating and saving a single material and then just copying it to the different meshes as needed. Godot has per-instance uniforms in 3D, so that would allow you to use the same material but have different parameters for the different instances.
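For illustration, a minimal sketch of a per-instance uniform in a Godot 4 spatial shader (the uniform name `tint` is just an example, not something from this project):

```
shader_type spatial;

// One shared ShaderMaterial; each MeshInstance3D can override "tint"
// in the inspector (Geometry > Instance Shader Parameters) or from code
// via GeometryInstance3D.set_instance_shader_parameter("tint", ...).
instance uniform vec4 tint : source_color = vec4(1.0);

void fragment()
{
	ALBEDO = tint.rgb;
}
```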
Oh, I mean in terms of the rendering pipeline. The reason I'm trying to use a screen shader is so it only modifies the image at the end of the pipeline, rather than applying to every object independently. Efficiency is my main concern.

In terms of suggestions, I have no clue! I keep searching and searching, but finding nothing. It seems like screen shaders in general are not prioritized, likely because there's no way to flatten the normals of a pixel containing a transparent object: either you get the normal of that object, or the normal of the object behind it.

At the same time, it seems like there should be a way to apply a shader to an entire scene's opaque objects, then apply that same shader to each transparency layer added over top. That would be the most efficient way, I think.
Oh, I think I see what you're saying. In that case, I think you might be looking for screen_shader.gdshader. This is the post-processing shader, which applies an effect after everything else is rendered. In this shader, you have access to what has been rendered to the screen using the screen texture. All the other shaders in the project are just object shaders. AFAIK in 3D, every mesh has to have a shader of some kind in order to be rendered at all. If you use […]

Does that help at all? Let me know if you're looking for something else 🙂
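For a rough idea of what such a screen-reading shader looks like in Godot 4 (a minimal sketch on a full-screen quad, not the actual contents of screen_shader.gdshader):

```
shader_type spatial;
render_mode unshaded;

// The already-rendered screen, exposed through the screen texture hint.
uniform sampler2D screen_tex : hint_screen_texture, filter_nearest;

void fragment()
{
	vec3 screen = texture(screen_tex, SCREEN_UV).rgb;
	ALBEDO = vec3(1.0) - screen; // trivial example effect: invert the colors
}
```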
It helps me see where the confusion is. I think I can be more specific. Currently, the setup is something like: […]

I suppose the big redundancy is that each camera has to render all the same information, but then throws most of it away. This would make the render 4x slower. I don't know the pipeline well enough, but it seems like there should be a way to get one shader to output more than one buffer.
The cameras should only render the info for their own layer, whether that's the color, depth, or normal layer. For example, the monolithic shader's fragment function branches on the camera's visible layers:

```
// Called once for every pixel.
void fragment()
{
	uint camera_layers = CAMERA_VISIBLE_LAYERS;
	// Use emission to render the depth value
	if (camera_layers == DEPTH_LAYER)
	{
		ALBEDO = vec3(0.0);
		float depth = transform_position(PROJECTION_MATRIX, VERTEX).z;
		EMISSION.x = depth;
		EMISSION.y = fract(depth * 256.0);
		EMISSION.z = 0.0; // Could add other info in this channel if desired...
	}
	// Use emission to render the normal value
	else if (camera_layers == NORMAL_LAYER)
	{
		ALBEDO = vec3(0.0);
		EMISSION = 0.5 * (NORMAL + vec3(1.0));
	}
	// Render albedo (and optionally alpha)
	else
	{
		ALBEDO = color.rgb;
#ifndef OPAQUE
		ALPHA = color.a;
#endif
		// Since the fragment() function has to be overridden, you may also want to look up how to
		// render out other common properties, like roughness, emission, metallic, etc...
	}
}
```

The […]
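As a sketch of how the packed depth could be read back in the post-processing shader (the `depth_buffer` uniform is hypothetical; the project's actual names and wiring may differ):

```
shader_type spatial;
render_mode unshaded;

// Hypothetical: fed with the ViewportTexture rendered by the depth camera.
uniform sampler2D depth_buffer;

void fragment()
{
	vec2 p = texture(depth_buffer, SCREEN_UV).rg;
	// Inverts the "x = depth, y = fract(depth * 256.0)" packing above.
	// Naive two-channel unpacks like this can be off by one 1/256 step
	// near channel boundaries because of 8-bit quantization.
	float depth = floor(p.r * 256.0) / 256.0 + p.g / 256.0;
	ALBEDO = vec3(depth); // visualize the recovered depth for debugging
}
```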
Regarding this [getting one shader to output more than one buffer], I think that is totally possible with regular GLSL. It's just not possible with Godot shader code. But I think they are planning to add that functionality with the rendering compositor that they are currently working on. This project is a workaround to allow you to do it with Godot shader code, in a manner of speaking.

However, there are definitely suboptimal parts about it. For example, each buffer has to have the same format, based on the format for Godot's viewports. So if all you needed was a stencil buffer (one channel of one bit), you'd still have to use a viewport texture (four channels of eight bits, I think).
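For comparison, here is roughly what that looks like in plain GLSL (not Godot shader language), assuming a framebuffer with three color attachments; the varying names are made up:

```
#version 330 core

// One pass, three buffers: each output is bound to its own color attachment.
layout(location = 0) out vec4 out_color;
layout(location = 1) out vec4 out_normal;
layout(location = 2) out vec4 out_depth;

in vec4 v_color;   // hypothetical inputs from the vertex shader
in vec3 v_normal;
in float v_depth;

void main()
{
	out_color  = v_color;
	out_normal = vec4(0.5 * (normalize(v_normal) + vec3(1.0)), 1.0);
	out_depth  = vec4(v_depth, fract(v_depth * 256.0), 0.0, 1.0);
}
```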
It seems what I'm talking about is MRT: godotengine/godot-proposals#495

And I guess my concern is whether or not the [monolithic shader ends up doing the work for every branch on every camera anyway].
Yes, in the "monolithic" approach, you may be right that each camera ends up going through each branch before throwing away the ones it doesn't need. I think that would depend on how shaders are compiled and optimized. I think the "modular" approach may be better in this regard, especially if you cull/discard in the |
I read through your previous comment more closely and noticed that you were asking about applying a shader to each layer of transparency. Sorry I missed that at first.

I'm not sure how you would do that, either with Godot's current tools or with this project. I think you might actually have to have a separate set of buffers for each transparent object, which ends up being a lot of buffers. That wouldn't be very practical with this project, since it would require counting the transparent objects in a scene and adding new cameras/viewports for each object.

But there may be a low-level way to do it using compute shaders. Maybe, each frame, run a custom compute shader on every transparent object, which renders out a custom buffer for the object (normals, depth, or whatever is needed). Then pass that buffer to your post-processing shader. I don't know all the details of how that would work, but it might be worth looking into if you need the depth/normal info for each transparent layer.
Hmmm... I really hope that cameras are designed to do the minimum amount of work needed to supply their shaders. If that's the case, the modular approach should be much more efficient than the monolithic one.

As for the transparency thing, I think I'm leaving it at "I don't know" for now. Going to model for a bit and see how I feel tomorrow. Never would have thought it would be so hard!
I'm going to close this issue, since I think your original question was answered. If you have more questions about this project in the future, I'm happy to answer those in an issue like this one. If you have more general questions about Godot, one of Godot's communities would probably be a better place to ask them.