
Screen shaders: not applying to sprite 3d, strange culling, strange interactions with preview grids #99491

Closed
SephReed opened this issue Nov 21, 2024 · 12 comments

Tested versions

v4.3.stable.official [77dcf97]

System information

Godot v4.3.stable - macOS 14.0.0 - GLES3 (Compatibility) - Apple M1 Pro - Apple M1 Pro (10 Threads)

Issue description

2024-11-21.18-30-10.mp4

A 3D scene with a Sprite3D, and a MeshInstance3D set to a full-screen quad with a shader.

Bugs include:

  • not affecting the Sprite3D
  • culling the Sprite3D at some angles and positions
  • culling the grid lines in the preview at some angles and positions

Steps to reproduce

  • Scene3D
  • Add Sprite3D, set to logo texture
  • Add MeshInstance3D
    • Set to Quad
    • Set height and width to 2m
    • Flip faces on
    • Add Material Shader
    • Create Shader
```glsl
shader_type spatial;
render_mode unshaded, fog_disabled;

uniform sampler2D screen_texture : source_color, hint_screen_texture;

void vertex() {
	POSITION = vec4(VERTEX.xy, 1.0, 1.0);
}

void fragment() {
	vec4 screen = texture(screen_texture, SCREEN_UV);
	ALBEDO.rgb = vec3(screen.r, 0.0, 1.0);
}
```

Navigate the preview window; notice things being culled, and notice the Sprite3D never being purpleized.

Minimal reproduction project (MRP)

screen_shader.zip

@tetrapod00 (Contributor) commented Nov 21, 2024

Edit: Hello to any future readers! If you were linked here from a comment in the official docs, be aware that while this info is generally accurate, it's a bit simplified and might not be technically correct in all aspects. Also, by the time you read this, it may be out of date.


Unfortunately, I don't think any of these behaviors are bugs. These are pretty common limitations with this postprocessing method, and they arise from interactions between different rendering systems in Godot.

Consider these facts about how Godot implements certain things:

  • Sprite3D is a transparent mesh.
  • The quad you are using for postprocessing is a transparent mesh.
  • Transparent meshes are sorted and drawn in order, based (roughly) on their distance to the camera.
  • The various screen textures (hint_screen_texture, hint_depth_texture, hint_normal_roughness_texture) are all created once, after all opaque objects are rendered and before all transparent objects are rendered. So transparent objects can use the screen textures, but cannot be in the screen textures.
  • The grid and axis lines in the scene are (to the best of my knowledge) just meshes that are drawn along with everything else. I believe they are drawn in the "transparent queue" as well.

Combine those facts and I believe all the behavior can be explained:

  • In general, Sprite3Ds with transparency enabled do not show up in the screen textures, and since this shader reads from the screen texture and then outputs to the whole screen, Sprite3Ds don't show up.
  • The strange "culling" behavior is due to the fact that sometimes the transparent meshes are sorted in a way where the Sprite3D shows up in front of the postprocessing quad.
  • Similarly with the grid lines: I think those are sometimes rendered in front of the postprocessing quad and sometimes behind it. This is the closest thing here to something that could be considered a bug, but it would be a bug in the implementation of the axis/grid lines. I do think the current approach of the grid and axis lines just being meshes in the scene is not ideal, because they also end up affected by the built-in postprocessing. The problems with this are already tracked in a couple of places: Gizmos aren't Z-sorted properly with transparent surfaces #9935 at least; if I find the other issues I'll add them here.

In your case, I think the easiest workaround is to set Alpha Cut to Discard in your Sprite3D. This will make the Sprite3D render either fully transparent or fully opaque pixels, and go in the opaque queue, so it shows up in the screen texture.
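If you'd rather set it from code than in the inspector, it would be something like this (`sprite_3d` here is a stand-in for a reference to your node):

```gdscript
# Pixels become either fully opaque or fully discarded, so the sprite
# goes into the opaque queue and is captured by the screen texture.
sprite_3d.alpha_cut = SpriteBase3D.ALPHA_CUT_DISCARD
```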

I think in some cases you can also work around this by setting the VisualInstance3D > Sorting > Offset of the quad to a large negative number. This will render most transparent objects over the postprocessing quad, so while they won't be affected by the postprocess, they will show up in the scene.
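In script form (with `quad` standing in for your MeshInstance3D, and the offset value chosen arbitrarily), that would be:

```gdscript
# A large negative sorting offset makes the quad sort as if it were much
# farther away, so other transparent objects draw on top of it.
quad.sorting_offset = -1000.0
```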

You could also consider using a CompositorEffect, which does have the ability to create postprocessing that works well with transparency.
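A CompositorEffect is a script resource added to a Compositor on the camera or environment. As a rough skeleton (not a working effect — the actual work happens in a GLSL compute shader dispatched from the callback, as the official compositor tutorial shows; the class name here is made up):

```gdscript
@tool
extends CompositorEffect
class_name ScreenEffectSketch

func _init() -> void:
	# Run after transparent objects are drawn, so Sprite3Ds are included.
	effect_callback_type = EFFECT_CALLBACK_TYPE_POST_TRANSPARENT

func _render_callback(p_effect_callback_type: int, p_render_data: RenderData) -> void:
	# Fetch the color buffer via p_render_data.get_render_scene_buffers(),
	# then dispatch a compute shader against it with RenderingDevice.
	pass
```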

These limitations are already somewhat documented, but they could perhaps be made clearer on the Advanced Postprocessing page itself.

(Also, I don't think the issue you linked is related: these limitations apply with perspective cameras too. There are some issues with orthographic cameras in shaders, but they're not related to this one IMO.)

@SephReed (Author) commented Nov 22, 2024

Firstly, thank you so much for the detailed explanation. The relief of hearing from someone knowledgeable is a gift. And you nailed it: transparency is the caveat I was missing, and it makes perfect sense.

That being said, why is this the recommended post processing method? I thought it was strange to have a quad in the scene at all, when all I really want is to take the buffer from the camera, and put it through a shader.

You mentioned "this postprocessing method," but it's the only one I've been able to find documented or in any tutorials online. Is there another method that simply adds a shader layer after the render stage?

@tetrapod00 (Contributor)

> why is this the recommended post processing method?

At the time that article was originally written, and up until the Compositor was implemented in 4.3, Godot really did not have a dedicated custom postprocessing solution. The method documented in "Advanced Postprocessing" is something of a hack.

These days the most robust way to implement custom postprocessing is probably a CompositorEffect, but it requires understanding lower-level GLSL and rendering code. There's also a good collection of resources here. I believe there are vague plans to implement a friendlier abstraction for custom postprocessing that uses the compositor but doesn't require as much boilerplate, but I don't think they are concrete yet. The official compositor tutorial implements something like that, since it lets you swap the GLSL code on demand.

@SephReed (Author)

Woah, really!? My intuition was wayyy off here. I went into this thinking "yup, there's some pixels, I'll just slap a shader on it. Easy peasy."

I've seen some suggestions to double-camera it: have a camera render to a 2D viewport, then add shaders to that.

@tetrapod00 (Contributor) commented Nov 22, 2024

Oh, if you only need the color texture and not the depth or normal+roughness textures, you can also do postprocessing after the 3D scene is rendered, using a ColorRect control node. That method is documented here: https://docs.godotengine.org/en/stable/tutorials/shaders/custom_postprocessing.html
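For reference, the ColorRect version of the purple effect from this issue would be a canvas_item shader along these lines (a sketch based on that tutorial, untested here):

```glsl
shader_type canvas_item;

// The screen texture is still available to canvas_item shaders.
uniform sampler2D screen_texture : hint_screen_texture, filter_linear_mipmap;

void fragment() {
	vec4 screen = texture(screen_texture, SCREEN_UV);
	// Same look as the spatial-quad shader: keep red, force blue.
	COLOR = vec4(screen.r, 0.0, 1.0, 1.0);
}
```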

And yeah, there are also some tricks you can do with viewports in some cases, to do "compositing" without the compositor.

@SephReed (Author) commented Nov 22, 2024

Okay. So, screen shader options are:

  1. Quad mesh - not great for anything with transparency
  2. ColorRect control node - color (screen) texture only
  3. Viewports - same as option 2 but with more steps?
  4. Compositor - only works on Forward+

You said that option 2 doesn't allow depth, normal, or roughness. It seems like I should be able to render the scene with normals if I'm able to render it with colors. Same for depth and roughness. Is it more that it would be inefficient because each would require a separate render of the scene?

@tetrapod00 (Contributor)

Option 2 (ColorRect) can't use depth, normal, or roughness because the data is no longer available at that point in the rendering process. For workarounds, you might find this comment and the linked project helpful. They explain it better than I can.

@SephReed (Author)

Perfect link. Thank you!

@tetrapod00 (Contributor)

And for the record, I think the current state of the postprocessing docs could be much improved. I would really like a page that compares the pros, cons, and limitations of each approach, especially now that the Compositor is implemented. It's just a large piece of work to do that.

@SephReed (Author)

I think you're very right on this one. With each step I take forwards, it seems I find myself even further confused as to how anything gets done efficiently.

My suspicion is that most people are applying shaders to individual objects and moving on. I don't know what the performance cost is of adding a grayscale shader to every object in a scene, rather than doing that in post-process. I would assume negligible for that type of shader, but that it adds up very quickly for things that are more complex.
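For illustration, a per-object grayscale material really is about as cheap as fragment shaders get; a sketch (with `base_tex` as a placeholder for the object's own texture):

```glsl
shader_type spatial;

uniform sampler2D base_tex : source_color;

void fragment() {
	vec3 col = texture(base_tex, UV).rgb;
	// Standard Rec. 601 luminance weights.
	ALBEDO = vec3(dot(col, vec3(0.299, 0.587, 0.114)));
}
```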

@SephReed (Author)

Another related discussion:

thompsop1sou/custom-screen-buffers#1
