Depth scale cleanup #16948
Conversation
Force-pushed from 832df9c to c926a6f
Anyway, what I'm thinking is:
Haven't looked yet but quickly - USE_ACCURATE_DEPTH is in contrast to using old-style GL depth. So:
We can change its meaning, but that's the only thing accurate depth CURRENTLY means in the code. Additionally:
Maybe we should more clearly separate the "clamp simulation mode" as a sub-option to accurate depth. I feel like your pull description is conflating the two concepts. To me it's a hierarchy. Accurate depth, at least historically, just meant "don't use the wrong old depth." It'd be a confusing name for the sub-option method itself, because obviously DEPTH_CLAMP is the more accurate of those two (well, arguably, maybe 24-to-16 is more accurate in some ways... either way, it's the least accurate of those three options, if we're using that word.) -[Unknown]
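A compact restatement of that hierarchy as a sketch (the enum and classifier are illustrative, not the actual PPSSPP dispatch code; the ordering of the two sub-options follows the precedence discussed below):

```cpp
// Illustrative only: "accurate depth" is the parent mode, with depth clamp
// and 24-to-16-bit scaling as sub-options underneath it.
enum class DepthMode {
	OldGLDepth,       // !USE_ACCURATE_DEPTH: the legacy GL-style depth path.
	Scale24To16,      // accurate depth + GPU_SCALE_DEPTH_FROM_24BIT_TO_16BIT.
	DepthClamp,       // accurate depth + GPU_USE_DEPTH_CLAMP.
	AccurateDefault,  // accurate depth with neither sub-option set.
};

DepthMode ClassifyDepthMode(bool accurateDepth, bool depthClamp, bool scale24To16) {
	if (!accurateDepth)
		return DepthMode::OldGLDepth;   // the only thing accurate depth negates
	if (scale24To16)
		return DepthMode::Scale24To16;  // takes precedence over clamp (see below)
	if (depthClamp)
		return DepthMode::DepthClamp;
	return DepthMode::AccurateDefault;
}
```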
Right, I think I'm indeed getting it wrong, and the code is a little confused too, both the original and after my changes. I really don't want to change the meaning of ACCURATE_DEPTH - that's accidental; it seems I did misunderstand what it means. The idea with writing the test is to challenge my assumptions and force us to get this right. Let's get this stuff fully cleared up once and for all, in the code and in the naming of things, and let's centralize any other depth value conversions that are floating around in the code to use GetDepthScaleFactors.
Yeah, I understand the flag/mode hierarchy now, thanks! I will fix up this PR accordingly tomorrow, and it will involve mostly commenting. Then after this is in, I'll centralize the remaining similar depth calculations, and finally, I'll bring back #16947, which will make all the shader uses of these factors dynamic.
Oh, one more question - do you know if there's a reason we don't remove GPU_USE_DEPTH_CLAMP when we activate GPU_SCALE_DEPTH_FROM_24BIT_TO_16BIT? Because there we do have a contradiction of sorts. Though I guess it could in theory still clamp some wildly out-of-range values that would otherwise be missed, however unlikely that is... Anyway, in the depth calculation, GPU_SCALE_DEPTH_FROM_24BIT_TO_16BIT should obviously take precedence over GPU_USE_DEPTH_CLAMP. Also, there's one in the various shader managers that I'm having a hard time figuring out: do we count depth in pixels? I don't remember where 1/256th of the depth range comes into the picture. When converting…
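For reference, a tiny standalone calculation (the helper name is made up) of where a ~1/256 factor can plausibly appear when mapping a 24-bit depth range onto a 16-bit one:

```cpp
#include <cstdio>

constexpr float kMax24Bit = 16777215.0f;  // 2^24 - 1
constexpr float kMax16Bit = 65535.0f;     // 2^16 - 1

// Hypothetical helper: scale a 24-bit depth value into the 16-bit PSP range.
constexpr float ScaleDepth24To16(float z24) {
	return z24 * (kMax16Bit / kMax24Bit);  // multiply by roughly 1/256
}

int main() {
	// (2^24 - 1) / (2^16 - 1) = 256.0039..., so the scale is ~1/256 - one
	// plausible origin for a 1/256th factor, if that's what it is here.
	printf("range ratio: %f\n", kMax24Bit / kMax16Bit);
	printf("max 24-bit value scales to: %f\n", ScaleDepth24To16(kMax24Bit));
	return 0;
}
```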
Force-pushed from c926a6f to 28a7912 (… add expected values)
Eh, went ahead and did it now already :) There was no bug to fix, just a bit of a misunderstanding, now corrected and commented to avoid it re-appearing in my head. I'm keeping the change in. Next step will be to fix the scale factor to make GetDepthScaleFactors just as accurate as ToScaledDepthFromIntegerScale, and then replace the latter with the former.
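One way to make "just as accurate" measurable (a self-contained sketch with toy stand-ins, not the real GetDepthScaleFactors / ToScaledDepthFromIntegerScale): sweep every 16-bit input through both conversion paths and report the worst disagreement.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>

// Compare two depth conversion routines across all 16-bit inputs.
float MaxDivergence(const std::function<float(float)> &a,
                    const std::function<float(float)> &b) {
	float worst = 0.0f;
	for (int z = 0; z <= 65535; z++) {
		worst = std::max(worst, std::fabs(a((float)z) - b((float)z)));
	}
	return worst;
}

int main() {
	// Toy stand-ins: the same normalization written two ways. A precomputed
	// reciprocal and a plain division can round differently for some inputs.
	auto viaFactors = [](float z) { return z * (1.0f / 65535.0f); };
	auto viaLegacy = [](float z) { return z / 65535.0f; };
	printf("max divergence: %g\n", MaxDivergence(viaFactors, viaLegacy));
	return 0;
}
```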
#8454 - I think this was copy-pasted into D3D11 and Vulkan, and I don't know that it's correct there. Specifically, see this comment: -[Unknown]
The armips revert looks accidental.
Don't remember why I didn't replace Apply with ApplyReverse - I feel like that was me...
-[Unknown]
This tests that some depth-related functions match each other better, and makes them purer (use flags are passed in as a parameter now, for ease of testing).
Later, I intend to make GetDepthScaleFactors the only way to get these, with the returned DepthScaleFactors object giving you all the needed functionality.
There's nothing here that changes behavior, only some renaming, except that calling DepthScaleFactor on its own now respects GPU_ACCURATE_DEPTH.
However, the new unit test fails when DEPTH_CLAMP is set, so I think we have some problems... Also, we always set DEPTH_CLAMP and ACCURATE_DEPTH together in GPUCommon, which seems like a contradiction - DEPTH_CLAMP seems to mean that we always use the full 0-1 range in depth buffers, while ACCURATE_DEPTH seems to mean that we don't... I'm a little confused.
EDIT: Removed the misunderstanding, added better comments.
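To make the intended end state concrete, a minimal self-contained sketch of the DepthScaleFactors idea (the flag value, scale/offset numbers, and exact signatures are illustrative assumptions, not the real implementation):

```cpp
#include <cstdint>
#include <cstdio>
#include <initializer_list>

// Hypothetical flag bit; the real GPU use-flag values live elsewhere.
constexpr uint32_t GPU_USE_ACCURATE_DEPTH = 1 << 0;

struct DepthScaleFactors {
	float scale;   // PSP 16-bit depth -> backend depth multiplier
	float offset;  // backend-space offset

	float Apply(float z16) const { return z16 * scale + offset; }
	float ApplyReverse(float z) const { return (z - offset) / scale; }
};

// Pure function of the use flags (passed as a parameter, per this PR), so it
// can be unit tested without global GPU state. The numbers are illustrative.
DepthScaleFactors GetDepthScaleFactors(uint32_t useFlags) {
	if (useFlags & GPU_USE_ACCURATE_DEPTH) {
		// Map 0..65535 into a sub-range of [0,1] instead of the full range.
		return DepthScaleFactors{ 1.0f / (65535.0f * 4.0f), 0.375f };
	}
	// Old-style: map 0..65535 directly onto 0..1.
	return DepthScaleFactors{ 1.0f / 65535.0f, 0.0f };
}

int main() {
	// Round-trip property the unit test can check: ApplyReverse(Apply(z)) == z.
	DepthScaleFactors f = GetDepthScaleFactors(GPU_USE_ACCURATE_DEPTH);
	for (float z : { 0.0f, 1.0f, 32768.0f, 65535.0f }) {
		printf("z=%.1f round-trips to %.3f\n", z, f.ApplyReverse(f.Apply(z)));
	}
	return 0;
}
```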