
Question: How are the shaders included in the scenes? #1

Closed
SephReed opened this issue Nov 22, 2024 · 11 comments

@SephReed

[ext_resource type="Shader" path="res://shaders/monolithic/opaque_shader.gdshader" id="3_xyfda"]

I see in the source code that the monolithic scene references its shaders, but I can't seem to figure out where in the editor you did that.

@SephReed
Author

Found where the shaders are specified. They seem to be applied to each object. This seems resource intensive to me.

@thompsop1sou
Owner

thompsop1sou commented Nov 22, 2024

Found where the shaders are specified. They seem to be applied to each object. This seems resource intensive to me.

Yes, it looks like this in the editor:

[Screenshot: the shader assigned to the object's material in the editor]
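
For reference, in the saved .tscn file this shows up as a ShaderMaterial sub-resource that points at the shader, roughly like this (the id and node name here are just illustrative, and the project may assign the material per-surface rather than as an override):

[sub_resource type="ShaderMaterial" id="ShaderMaterial_1abcd"]
shader = ExtResource("3_xyfda")

[node name="SomeMesh" type="MeshInstance3D" parent="."]
material_override = SubResource("ShaderMaterial_1abcd")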

When you say "resource intensive", what are you referring to exactly?

If you mean it's not optimized for rendering, that definitely might be true. I don't know all the details of how Godot's rendering pipeline is (or is not) optimized, so I don't know the best way to create these for that purpose. If you have any suggestions, I'd be interested in hearing them!

If you mean that you end up with a lot of different materials, that's true. But you could alleviate that by creating and saving a single material and then just copying that to the different meshes as needed. Godot has per-instance uniforms in 3D, so that would allow you to use the same material but have different parameters for the different instances.
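
For example, the shared shader could declare a per-instance uniform, something like this sketch (the uniform name is just illustrative):

// Declared once in the shared shader, but each instance can override the value.
instance uniform vec4 tint_color : source_color = vec4(1.0);

void fragment()
{
	ALBEDO = tint_color.rgb;
}

Each mesh can then get its own value from GDScript with set_instance_shader_parameter("tint_color", some_color) on the GeometryInstance3D, without duplicating the material.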

@SephReed
Author

Oh, I mean in terms of the rendering pipeline. The reason I'm trying to use a screen shader is so it only modifies the image at the end of the pipeline, rather than applying to every object independently. Efficiency is my main concern.

In terms of suggestions, I have no clue! I keep searching and searching, but finding nothing. It seems like screen shaders in general are not prioritized, likely because there's no way to flatten the normals of a pixel containing a transparent object. Either you get the normal of that object, or the normal of the object behind it.

At the same time, it seems like there should be a way to apply a shader to an entire scene's opaque objects, then apply that same shader to each transparency layer added on top. That would be the most efficient way, I think.

@thompsop1sou
Owner

thompsop1sou commented Nov 22, 2024

Oh, I think I see what you're saying. In that case, I think you might be looking for screen_shader.gdshader. This is the post-processing shader, which applies an effect after everything else is rendered. In this shader, you have access to what has been rendered to the screen through the color_texture uniform. You also have access to the screen-space depth and normal values through the depth_texture and normal_texture uniforms. Using those textures, you can recreate, for each pixel, its global location and which direction it's facing. With this info, you can do stuff like add fog, outlines, DOF blur, or other screen-space effects.
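
For example, here's a rough sketch of a simple depth-fog effect in that shader (assuming the project's uniform names and the two-channel depth packing used by the object shaders; the exact unpacking depends on the viewport's texture format):

void fragment()
{
	vec3 color = texture(color_texture, SCREEN_UV).rgb;
	// Approximate inverse of the object shaders' packing:
	// x holds the coarse depth, y holds fract(depth * 256.0).
	vec2 d = texture(depth_texture, SCREEN_UV).rg;
	float depth = floor(d.x * 256.0) / 256.0 + d.y / 256.0;
	// normal_texture can be sampled the same way and unpacked with * 2.0 - 1.0.
	// Example effect: fade distant pixels toward a fog color.
	ALBEDO = mix(color, vec3(0.6, 0.7, 0.8), clamp(depth, 0.0, 1.0));
}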

All the other shaders in the project are just object shaders. AFAIK, in 3D every mesh has to have a shader of some kind in order to be rendered at all. If you use StandardMaterial3D, that gets translated to a shader on the backend before it is rendered. The object shaders that I provide are different from StandardMaterial3D in that they render differently based on the layer that is being rendered. That's what allows them to send color info on one layer, depth info on the next, and normal info on the last layer.

Does that help at all? Let me know if you're looking for something else 🙂

@SephReed
Author

It helps me see where the confusion is. I think I can be more specific:

Currently, the setup is something like:

  • there are four cameras
  • each camera runs a different shader on the entirety of each object
  • each shader reduces the camera's information down to a single albedo color output
  • for each camera/shader, every object is rendered, and the results are layered together to create that camera's final output
  • those four+ final outputs are simultaneously accessible from screen_shader
  • in total: shader passes = (four cameras * each object) + one last post-process

I suppose the big redundancy is that each camera has to render all the same information, but then throws most of it away. This would make the render 4x slower.

I don't know the pipeline well enough, but it seems like there should be a way to get one shader to output more than one buffer.

@thompsop1sou
Owner

thompsop1sou commented Nov 22, 2024

The cameras should only render the info for their own layer, whether that's the color, depth, or normal layer. For example, the fragment() function in object_shader.gdshaderinc looks like this:

// Called once for every pixel.
// Note: DEPTH_LAYER and NORMAL_LAYER (uint layer constants), the color uniform,
// and the transform_position() helper are defined elsewhere in the include file.
void fragment()
{
	// CAMERA_VISIBLE_LAYERS is a built-in uint holding the cull mask of the
	// camera that is rendering the current pass.
	uint camera_layers = CAMERA_VISIBLE_LAYERS;

	// Use emission to render the depth value
	if (camera_layers == DEPTH_LAYER)
	{
		ALBEDO = vec3(0.0);
		float depth = transform_position(PROJECTION_MATRIX, VERTEX).z;
		// Pack the depth into two channels for extra precision.
		EMISSION.x = depth;
		EMISSION.y = fract(depth * 256.0);
		EMISSION.z = 0.0; // Could add other info in this channel if desired...
	}
	// Use emission to render the normal value
	else if (camera_layers == NORMAL_LAYER)
	{
		ALBEDO = vec3(0.0);
		// Remap the normal from [-1, 1] to [0, 1] so it fits in a color channel.
		EMISSION = 0.5 * (NORMAL + vec3(1.0));
	}
	// Render albedo (and optionally alpha)
	else
	{
		ALBEDO = color.rgb;
#ifndef OPAQUE
		ALPHA = color.a;
#endif
		// Since the fragment() function has to be overridden, you may also want to look up how to
		// render out other common properties, like roughness, emission, metallic, etc...
	}
}

The if... else if... else... statement is what makes sure only the correct data is written on each layer. Again, I'm not very familiar with the rendering pipeline, but I think that this code is run once per camera (per pixel). So, as a camera runs through this code, it will only run the branch corresponding to its layer: the depth camera will only run the first branch, the normal camera will only run the second branch, and the color camera will only run the last branch.

I don't know the pipeline well enough, but it seems like there should be a way to get one shader to output more than one buffer.

Regarding this, I think that is totally possible with regular GLSL. It's just not possible with Godot shader code. But I think they are planning to add that functionality with the rendering compositor that they are currently working on.
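
For reference, in raw GLSL (not Godot shader code) a single fragment shader can declare several outputs, each bound to a different attachment, something like:

#version 450

layout(location = 0) out vec4 out_color;   // attachment 0: color
layout(location = 1) out vec4 out_depth;   // attachment 1: depth
layout(location = 2) out vec4 out_normal;  // attachment 2: normal

void main()
{
	// One pass fills all three buffers, instead of one camera per buffer.
	out_color = vec4(1.0, 0.0, 0.0, 1.0);
	out_depth = vec4(gl_FragCoord.z);
	out_normal = vec4(0.0, 0.0, 1.0, 0.0);
}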

This project is a workaround to allow you to do it with Godot shader code, in a manner of speaking. However, there are definitely suboptimal parts about it. For example, each buffer has to have the same format, based on the format of Godot's viewports. So if all you needed was a stencil buffer (one channel of one bit), you'd still have to use a viewport texture (four channels of eight bits, I think).

@SephReed
Author

It seems what I'm talking about is MRT (multiple render targets): godotengine/godot-proposals#495

And I guess my concern is whether or not the if... else if... else... is enough to stop the camera from generating that data. I suspect that each camera does the same amount of work before the shaders, and then the shaders narrow the data.

@thompsop1sou
Owner

Yes, in the "monolithic" approach, you may be right that each camera ends up evaluating every branch and then throwing away the results it doesn't need. I think that would depend on how the shaders are compiled and optimized.

I think the "modular" approach may be better in this regard, especially if you cull/discard in the vertex() function. The vertex() function of every shader will run for every camera, but that should usually be much cheaper than running the fragment() function. And then the fragment() function will only run on the cameras where the material hasn't already been culled/discarded.
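
A rough sketch of that kind of vertex-stage cull (MY_LAYER is just a stand-in for one of the project's layer constants; this isn't a true discard, it just pushes the triangle outside the clip volume so fragment() never runs):

void vertex()
{
	if (CAMERA_VISIBLE_LAYERS != MY_LAYER)
	{
		// Not this camera's layer: move the vertex outside clip space
		// so the whole triangle is clipped away.
		POSITION = vec4(2.0, 2.0, 2.0, 1.0);
	}
	else
	{
		// Standard projection, written explicitly because POSITION is
		// overridden in the other branch.
		POSITION = PROJECTION_MATRIX * MODELVIEW_MATRIX * vec4(VERTEX, 1.0);
	}
}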

@thompsop1sou
Owner

In terms of suggestions, I have no clue! I keep searching and searching, but finding nothing. It seems like screen shaders in general are not prioritized, likely because there's no way to flatten the normals of a pixel containing a transparent object. Either you get the normal of that object, or the normal of the object behind it.

At the same time, it seems like there should be a way to apply a shader to an entire scene's opaque objects, then apply that same shader to each transparency layer added on top. That would be the most efficient way, I think.

I read through your previous comment more closely and noticed that you were asking about applying a shader to each layer of transparency. Sorry I missed that at first. I'm not sure how you would do that, either with Godot's current tools or with this project.

I think you might actually have to have a separate set of buffers for each transparent object, which ends up being a lot of buffers. That wouldn't be very practical with this project, since that would require counting the transparent objects in a scene and adding new cameras/viewports for each object. But there may be a low-level way to do that using compute shaders. Maybe, each frame, run a custom compute shader on every transparent object, which renders out a custom buffer of the object (normals, depth, or whatever is needed). Then pass that buffer to your post-processing shader. I don't know all the details of how that would work, but it might be worth looking into if you need the depth/normal info for each transparent layer.

@SephReed
Author

Hmmm... I really hope that cameras are designed to do the minimum amount of work needed to supply their shaders. If that's the case, the modular approach should be much more efficient than the monolithic one.

As for the transparency thing, I think I'm leaving it at "I don't know" for now. Going to model for a bit and see how I feel tomorrow.

Never would have thought it would be so hard!

@thompsop1sou
Owner

I'm going to close this issue, since I think your original question was answered. If you have more questions about this project in the future, I'm happy to answer them in an issue like this one. If you have more general questions about Godot, one of Godot's communities would probably be a better place to ask.
