Are 8-bit outputs sRGB encoded? #988

Closed
kdashg opened this issue Mar 18, 2020 · 51 comments

@kdashg
Contributor

kdashg commented Mar 18, 2020

They probably should be spec'd as such, unless there's major hardware that otherwise supports XR but doesn't support sRGB outputs.

@cabanier
Member

I'm unsure this needs to be called out in the spec since that's already defined by canvas.

@kdashg
Contributor Author

kdashg commented Mar 18, 2020

It's defined as the sRGB color space, but sRGB encoding is different: https://hackmd.io/@jgilbert/sRGB-WebGL
In 8-bit sRGB encoding, a linear value of 0.5 encodes as 0xbb. About 70% of the available values are darker than 50% gray, yielding more detailed darks and generally better-looking scenes.
Normal canvas encoding is 8-bit linear.
XR device outputs (especially headsets) have a preference for sRGB. I know of at least one device that requires sRGB-encoded textures for its fast path.
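
As a quick numeric check of the claim above (my own illustration using the standard sRGB transfer function, not part of the original comment):

function linearToSrgb(v) {
  return v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1 / 2.4) - 0.055;
}
const halfGray = 255 * linearToSrgb(0.5);
console.log(halfGray);                  // ≈ 187.5, i.e. about 0xbb
console.log(Math.ceil(halfGray) / 256); // ≈ 0.73 — roughly 70% of codes sit below 50% linear gray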

@asajeffrey

Yeah, it seems there are two things here... There's the colour space, which is implicitly sRGB, but it might be worth making that clear, since the compositing path for immersive mode doesn't involve the browser compositor. There's also the texture format, which is trickier, since the browsers use linear formats but the devices might have a preference for a non-linear one.

@cabanier
Member

The color management chain must not be any different from what is done for a regular web page.

Chrome and Safari both have a color-correct workflow, so they can render with any color space into any color device and have the colors match. I believe Firefox doesn't have this.

@kdashg
Contributor Author

kdashg commented Mar 18, 2020 via email

@asajeffrey

This is also a perf issue on some platforms, where both RGB and sRGB work, but sRGB goes directly to the OS's compositor, and RGB gets an extra blit. We won't be able to match native perf on those platforms if we don't support the sRGB format.

@cabanier
Member

This is also a perf issue on some platforms, where both RGB and sRGB work, but sRGB goes directly to the OS's compositor, and RGB gets an extra blit. We won't be able to match native perf on those platforms if we don't support the sRGB format.

I was under the impression that most systems allow you to write to an sRGB or linear sRGB buffer, so you wouldn't need an extra blit.

@asajeffrey

That's not what we've been told by the device manufacturer 🙁 We've not done the experiments to find out though, partly because the extra blit is there in the conversion from side-by-side to texture arrays, so we need to switch to layers first.

@cabanier
Member

@asajeffrey does this mean you're looking into implementing layers? If so, I'd love to hear what feedback you have.

@asajeffrey

Yeah, it's next on my queue. Our motivation is to get a blit-free path for immersive XR sessions. We'll see if that's possible!

@toji toji added this to the Future milestone Mar 27, 2020
@cabanier
Member

With the Oculus browser's move to OpenXR, this implicit assumption that we write sRGB colors to an RGB texture has become an issue for us.
Previously, the Oculus API let us create an RGB swapchain and the compositor fixed up the colors for us. In OpenXR, there is no such API, so the browser now has to write into an sRGB swapchain but pretend it is RGB.

I looked at the code that was written by Microsoft and they seem to be doing the same thing.

What can we do to work around this?

@cabanier
Member

/facetoface

@klausw
Contributor

klausw commented Apr 14, 2022

To avoid confusion, please don't just use "sRGB" by itself in this issue. As @kdashg pointed out, it's ambiguous whether this means linear color values vs. gamma-adjusted nonlinear values, or a distinction between specific color spaces such as sRGB vs. CIE RGB or others.

FWIW, https://www.w3.org/TR/webxr-ar-module-1/#xr-compositor-behaviors defines blending modes for compositing the rendered buffers, i.e. "source-over" vs "lighter" for alpha-blend and additive environment blending respectively. The linked formulas appear to assume linear color values. If the provided buffer were using gamma-adjusted nonlinear sRGB, blending would require a conversion to linear color values and then back to gamma-adjusted values for final output.
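
To illustrate why the distinction matters for those formulas, here is a small sketch (my own, not from the AR module spec) of "source-over" blending a 50%-alpha white source over black, once correctly in linear space and once directly on sRGB-encoded values:

function srgbToLinear(v) { return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4); }
function linearToSrgb(v) { return v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1 / 2.4) - 0.055; }

const srcLinear = 1.0, srcAlpha = 0.5; // 50%-opaque white source
const dstSrgb = 0.0;                   // black destination, sRGB-encoded

// Correct: linearize the destination, blend, then re-encode for display.
const blended = srcLinear * srcAlpha + srgbToLinear(dstSrgb) * (1 - srcAlpha);
console.log(Math.round(255 * linearToSrgb(blended))); // ≈ 188

// Incorrect: blending the nonlinear values directly gives a visibly darker result.
console.log(Math.round(255 * (linearToSrgb(srcLinear) * srcAlpha + dstSrgb * (1 - srcAlpha)))); // 128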

@cabanier
Member

cabanier commented Apr 14, 2022

FWIW, https://www.w3.org/TR/webxr-ar-module-1/#xr-compositor-behaviors defines blending modes for compositing the rendered buffers, i.e. "source-over" vs "lighter" for alpha-blend and additive environment blending respectively. The linked formulas appear to assume linear color values. If the provided buffer were using gamma-adjusted nonlinear sRGB, blending would require a conversion to linear color values and then back to gamma-adjusted values for final output.

That is indeed happening in the compositor: sRGB textures are linearized and then blended.
The issue is that the browser writes nonlinear (sRGB-encoded) values into a buffer typed as linear RGB. Because of this, when that buffer is fed directly to the compositing step, the colors come out too bright.

@klausw
Contributor

klausw commented Apr 14, 2022

It looks as if there are multiple potential issues here:

  • The gamma transfer function is applied too few or too many times, resulting in too-bright or too-dark output. This would happen if there's a mismatch between the application and the compositor in assuming that color values are linear or nonlinear. The spec needs to be clear about the intended interpretation here. This applies to both VR and AR modes.
  • Alpha blending is done incorrectly due to using linear math on nonlinear color values. This results in correct colors for opaque rendered content, but incorrect blending of partially transparent rendered content. (I think Chrome's phone AR mode may be guilty of that, this is something to follow up on.)
  • Ambiguous or misleading spec - for example, if the blending modes described in https://www.w3.org/TR/webxr-ar-module-1/#xr-compositor-behaviors are intended to work on nonlinear color values, I think that the spec should be explicit about that and mention that the values need to be linearized before blending and then converted back to nonlinear values for display. (Currently it just links to formulas using linear blending.)
  • Wrong color space in the sense of different primaries, i.e. the application wants to use a wide-gamut space such as Adobe RGB while the compositor expects sRGB. (I get the impression that this isn't the core concern for this issue, but in the future devices may want to add support for other color spaces.)
  • Changing the white point, for example to avoid a color temperature mismatch between rendered content and the real world on an additive headset. (Also not the core issue here, but maybe something to address in the future?)

@cabanier
Member

cabanier commented Apr 14, 2022

  • The gamma transfer function is applied too few or too many times, resulting in too-bright or too-dark output. This would happen if there's a mismatch between the application and the compositor in assuming that color values are linear or nonlinear. The spec needs to be clear about the intended interpretation here. This applies to both VR and AR modes.
  • Alpha blending is done incorrectly due to using linear math on nonlinear color values. This results in correct colors for opaque rendered content, but incorrect blending of partially transparent rendered content. (I think Chrome's phone AR mode may be guilty of that, this is something to follow up on.)

I suspect that this is a web platform issue that applies equally to 2D content.
There are some comments from Microsoft in the Chromium codebase that imply that this is the case. @RafaelCintron

  • Ambiguous or misleading spec - for example, if the blending modes described in https://www.w3.org/TR/webxr-ar-module-1/#xr-compositor-behaviors are intended to work on nonlinear color values, I think that the spec should be explicit about that and mention that the values need to be linearized before blending and then converted back to nonlinear values for display. (Currently it just links to formulas using linear blending.)

I'm unsure if that's needed. How is blending done today on a 2D page? Isn't it the same?
Also, layers allow the creation of sRGB textures. How does blending work there? It would seem strange if it linearized.

  • Wrong color space in the sense of different primaries, i.e. the application wants to use a wide-gamut space such as Adobe RGB while the compositor expects sRGB. (I get the impression that this isn't the core concern for this issue, but in the future devices may want to add support for other color spaces.)
  • Changing the white point, for example to avoid a color temperature mismatch between rendered content and the real world on an additive headset. (Also not the core issue here, but maybe something to address in the future?)

These two points are more about color management. OpenXR has APIs for those and we picked one for WebXR. Would it make sense to expose them?

@klausw
Contributor

klausw commented Feb 7, 2024

Can we please add an unambiguous statement to the spec about what the color encoding (in the sense of a linear vs. sRGB "gamma" curve) is supposed to be for an immersive session, to avoid diverging implementations?

According to @toji the expected behavior is that apps should use sRGB output encoding, matching what they'd use for plain 2D rendering to a canvas.

Adding to the confusion, the WebXR Layers API appears to specify RGBA as the default color format for projection layers which would be a linear encoding:

https://www.w3.org/TR/webxrlayers-1/#xrprojectionlayerinittype

dictionary XRProjectionLayerInit {
  //[...]
  GLenum colorFormat = 0x1908; // RGBA
};

It's also a bit of a trap that apparently OpenXR swapchains need an API-specific sRGB format to avoid being interpreted as linear, so I think an implementation on top of OpenXR could easily use the wrong encoding.

I think this is causing real-world issues: according to aframevr/aframe#5444, the Apple Vision Pro appears to be treating WebXR color data as linear, causing dark colors to appear too bright and washed out. @AdaRoseCannon FYI.

It would be unfortunate if apps start hardcoding compensating measures based on device name or similar, since then the result would look wrong if an implementation fixes this later.

@cabanier
Member

cabanier commented Feb 7, 2024

According to @toji the expected behavior is that apps should use sRGB output encoding, matching what they'd use for plain 2D rendering to a canvas.

This is correct, as in: the OS should treat the buffer that is produced by WebXR (or WebGL canvas) as sRGB.

Adding to the confusion, the WebXR Layers API appears to specify RGBA as the default color format for projection layers which would be a linear encoding:

That spec is correct. All compositing in WebXR and WebXR layers is done in linear space.

It's also a bit of a trap that apparently OpenXR swapchains need an API-specific sRGB format to avoid being interpreted as linear, so I think an implementation on top of OpenXR could easily use the wrong encoding.

What is that API specific sRGB format?

I think this is causing real-world issues: according to aframevr/aframe#5444, the Apple Vision Pro appears to be treating WebXR color data as linear, causing dark colors to appear too bright and washed out. @AdaRoseCannon FYI.

This is simply a bug in AVP's rendering pipeline. Their rendering in 3D should match what is done in 2D. WebGL draws the same pixels in either world.
As I mentioned before in this thread, this also caused some grief for Quest browser but once we understood the problem, it wasn't too hard to fix.

It would be unfortunate if apps start hardcoding compensating measures based on device name or similar, since then the result would look wrong if an implementation fixes this later.

Yes, hopefully Apple can fix this soon so authors don't start working around it by adding code based on the user agent. :-@

@klausw
Contributor

klausw commented Feb 7, 2024

This is correct, as in: the OS should treat the buffer that is produced by WebXR (or WebGL canvas) as sRGB.

I think we're in agreement here, but as far as I can tell the spec doesn't say this anywhere. Also, I think it would be useful to have a WebXR sample that tests this, for example by showing a dithered pattern next to a color gradient similar to http://www.lagom.nl/lcd-test/gamma_calibration.php . (This brings back memories of the discussions around premultiplied alpha for additive blend mode in immersive-web/webxr-ar-module#14 where there was also an implementation inconsistency.)

Adding to the confusion, the WebXR Layers API appears to specify RGBA as the default color format for projection layers which would be a linear encoding:

That spec is correct. All compositing in WebXR and WebXR layers is done in linear space.

Those are two orthogonal issues. Yes, all compositing in general should be done in linear space, otherwise the result is incorrect. However, this does not mean that the inputs need to be provided in a linear buffer. As long as it's properly annotated in a way recognized by the compositor, the input layers can be linear 8-bit RGB, sRGB8, a floating point format, or anything else that's supported. If for example the input layer is marked as an SRGB8 texture, a texture read in the compositor shader will automatically convert it to linear for further processing. (I think I got this wrong in the earlier comment #988 (comment) - it's fine to specify blending algorithms in linear format as long as the input format conversion happens correctly before this, as is the case when a shader reads from a properly typed texture.)

Overall, I think it seems better to think of SRGB8 as a weird number format, analogous to floating point, in the sense that it just stores numbers in a different way to preserve accuracy for low intensities.
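
A quick numeric illustration of that framing (my own check using the standard sRGB decode, not from the earlier comments): the smallest nonzero intensity each 8-bit encoding can represent differs by more than an order of magnitude near black.

function srgbToLinear(v) { return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4); }
console.log(srgbToLinear(1 / 255)); // ≈ 0.0003 — smallest nonzero intensity in 8-bit sRGB
console.log(1 / 255);               // ≈ 0.0039 — smallest nonzero intensity in 8-bit linear, ~13x coarser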

It's also a bit of a trap that apparently OpenXR swapchains need an API-specific sRGB format to avoid being interpreted as linear, so I think an implementation on top of OpenXR could easily use the wrong encoding.

What is that API specific sRGB format?

See the link, it mentions DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, GL_SRGB8_ALPHA8, and VK_FORMAT_R8G8B8A8_SRGB.

The note there is also relevant: OpenXR applications should avoid submitting linear encoded 8 bit color data (e.g. DXGI_FORMAT_R8G8B8A8_UNORM) whenever possible as it may result in color banding. This is indeed an issue for WebXR also - storing color data in a linear 8-bit buffer irreversibly loses data for low-intensity colors. Due to the intensity slope, I think dark colors only have about 5 bits of usable precision. Using an SRGB8 format avoids this issue - the shader load auto-converts to a higher-precision internal format when doing shading calculations, and it can convert the end result back to sRGB (if needed) with effectively no data loss. If it reads the data from a linear 8-bit buffer, that's not possible.

@cabanier
Member

cabanier commented Feb 7, 2024

This is correct, as in: the OS should treat the buffer that is produced by WebXR (or WebGL canvas) as sRGB.

I think we're in agreement here, but as far as I can tell the spec doesn't say this anywhere. Also, I think it would be useful to have a WebXR sample that tests this, for example by showing a dithered pattern next to a color gradient similar to http://www.lagom.nl/lcd-test/gamma_calibration.php .

Do canvas 2D, WebGL, or CSS specify this? I feel that this is a generally underspecified corner.
My assertion is that if you draw in 2D and make the same calls in 3D, it must look the same.

Adding to the confusion, the WebXR Layers API appears to specify RGBA as the default color format for projection layers which would be a linear encoding:

That spec is correct. All compositing in WebXR and WebXR layers is done in linear space.

Those are two orthogonal issues. Yes, all compositing in general should be done in linear space, otherwise the result is incorrect. However, this does not mean that the inputs need to be provided in a linear buffer.

I wasn't talking about the inputs; WebGL specifies the behavior there.
It's that the buffer the inputs are rendered into needs to be linear RGB.

Overall, I think it seems better to think of SRGB8 as a weird number format, analogous to floating point, in the sense that it just stores numbers in a different way to preserve accuracy for low intensities.

I'm unsure what you mean by that.

It's also a bit of a trap that apparently OpenXR swapchains need an API-specific sRGB format to avoid being interpreted as linear, so I think an implementation on top of OpenXR could easily use the wrong encoding.

What is that API specific sRGB format?

See the link, it mentions DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, GL_SRGB8_ALPHA8, and VK_FORMAT_R8G8B8A8_SRGB.

These are swapchain formats for the different graphics APIs: DXGI_FORMAT_R8G8B8A8_UNORM_SRGB is for a DirectX swapchain, GL_SRGB8_ALPHA8 for an OpenGL one, and VK_FORMAT_R8G8B8A8_SRGB for a Vulkan one.
AFAIK for OpenGL, it is the standard internal format that is used to specify the colorspace.

The note there is also relevant: OpenXR applications should avoid submitting linear encoded 8 bit color data (e.g. DXGI_FORMAT_R8G8B8A8_UNORM) whenever possible as it may result in color banding. This is indeed an issue for WebXR also - storing color data in a linear 8-bit buffer irreversibly loses data for low-intensity colors. Due to the intensity slope, I think dark colors only have about 5 bits of usable precision. Using an SRGB8 format avoids this issue - the shader load auto-converts to a higher-precision internal format when doing shading calculations, and it can convert the end result back to sRGB (if needed) with effectively no data loss. If it reads the data from a linear 8-bit buffer, that's not possible.

Yes, that would be the case IF there were a conversion.
If you create a swapchain in linear RGB, the compositor will apply a conversion, which will drop precision and make it look too bright. This is why it should be created as sRGB but exposed to the page as linear.

@klausw
Contributor

klausw commented Feb 7, 2024

Do canvas 2d, webgl or css specify this? I feel that this is a generally underspecified corner.
My assertion is that if you draw in 2D and make the same calls in 3D, it must look the same.

I think this just started out as the expected default behavior because CRT monitors happened to have an approximately sRGB response curve, so the pixels that apps wrote into output buffers got interpreted as sRGB.

I 100% agree that apps should look the same for 2D and XR output without needing to special-case their color handling for immersive sessions. For example, using the recommended color management in Three.JS adds a linear-to-sRGB conversion for the final rendering result. (See "Output color space" here.)

I wasn't talking about the inputs; WebGL specifies the behavior there.
It's that the buffer that the inputs are rendered into, needs to be linear RGB.

By "inputs" I meant inputs to the compositor, which is the output of the WebXR app. This does NOT need to be linear RGB. The format can be whatever the XR app and the compositor agree on. If the app provides an XRProjectionLayer with format "RGB", it's linear, and if it provides format "SRGB8", the data is stored in nonlinear format and the compositor converts it to linear when it's reading from the texture buffer. (Otherwise the colorFormat attribute to XRProjectionLayerInit would be pointless?)

Overall, I think it seems better to think of SRGB8 as a weird number format, analogous to floating point, in the sense that it just stores numbers in a different way to preserve accuracy for low intensities.

I'm unsure what you mean by that.

Storing color intensities in 8 bits per channel means there are only 256 different values. If you store data as 8-bit linear and eventually convert that to sRGB for final display output, you lose a lot of precision for dark colors since there's no possible 8-bit linear input value to represent them. If you encode data as 8-bit sRGB, you effectively have more bits of precision for dark colors.

Here's a JS demonstration of a lossy conversion when forcing colors into an 8-bit linear intermediate encoding:

let p = [];
for (let i = 0; i < 256; ++i) { p[i] = i; }
function srgbToLinear(v) { return v <= 0.04045 ? v / 12.92 : Math.pow((v+0.055)/1.055, 2.4); }
function srgbToLinear8bit(v) { return Math.round(255 * srgbToLinear(v / 255)); }
function linearToSrgb(v) { return v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1/2.4) - 0.055; }
function linearToSrgb8bit(v) { return Math.round(255 * linearToSrgb(v / 255)); }

p.map(srgbToLinear8bit);
=> [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 10, 10, 10, 11, 11, 12, 12, 12, 13, 13, 13, 14, 14, 15, 15, 16, 16, 17, 17, 17, 18, 18, 19, 19, 20, 20, 21, 22, 22, 23, 23, 24, 24, 25, 25, 26, 27, 27, 28, 29, 29, 30, 30, 31, 32, ..., 253, 255]

p.map(srgbToLinear8bit).map(linearToSrgb8bit);
=>  [0, 0, 0, 0, 0, 0, 0, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 22, 22, 22, 22, 22, 22, 22, 22, 28, 28, 28, 28, 28, 28, 34, 34, 34, 34, 34, 38, 38, 38, 38, 42, 42, 42, 42, 46, 46, 46, 50, 50, 50, 50, 53, 53, 53, 56, 56, 56, 59, 59, 61, 61, 61, 64, 64, 64, 66, 66, 69, 69, 71, 71, 73, 73, 73, 75, 75, 77, 77, 79, 79, 81, 83, 83, 85, 85, 86, 86, 88, 88, 90, 92, 92, 93, 95, 95, 96, 96, 98, 99, ..., 254, 255]

Note the data loss for low-intensity colors. A smooth color gradient from 0-28 only has 13 and 22 as intermediate values, leading to color banding.

And this is an inherent problem no matter how the app tries to render. There's simply no way to get final sRGB intensities on screen between 1 and 12; your choices are just 0 or 13.

If instead the app provides data to the compositor in a nonlinear sRGB 8-bit encoding, the dark colors keep their precision. (In exchange for having fewer distinct bright colors, but this is far less visually obvious.)

Yes, the compositor will internally do linear calculations after reading data from a sRGB texture (and converting to linear), but it does NOT have to crush the values to 8 bits while doing its computations since the GPU does shader calculations at higher internal precision. At the end it has to convert to sRGB, and it's important to avoid having an 8-bit linear intermediate texture format in the path to that.

These are for swapchains for different graphics drivers.

Yes, but I think the issue here is that textures can be typed as either linear RGB or nonlinear sRGB, and using sRGB is generally opt-in behavior because earlier APIs tended to silently assume linear encodings. Marking a texture as sRGB specifically means that a shader sampling that texture will do an sRGB-to-linear conversion automatically, so that the shader code can assume that all further operations happen in linear color space. The benefit of doing so is that the GPU typically does fragment computations in a higher-accuracy internal format, and the resulting data has more than 8 bits of linear precision for dark colors.

Yes, that would be the case IF there was a conversion.

Marking a texture as sRGB specifically means that there is an auto-conversion happening when the texture gets read by the compositor's shader.

If you create a swapchain in linear RGB, the compositor will apply a conversion which will drop precision and make it look too bright. This is why it should be created as sRGB but exposed to the page as linear.

I don't understand what you mean here. Yes, if an app creates sRGB output values (as usual for final output) and the compositor interprets it as a type=linear texture, it will be too bright. The texture needs to be marked as sRGB so that the texture read correctly converts it to linear. That's supposed to be the default behavior, and I think should be the recommended method for the Layers extension by marking the projection layer as colorFormat=SRGB.

@klausw
Contributor

klausw commented Feb 7, 2024

For the email record, I just edited the previous comment to add a missing section:

[...] your choices are just 0 or 13.

If instead the app provides data to the compositor in a nonlinear sRGB 8-bit encoding, the dark colors keep their precision. (In exchange for having fewer distinct bright colors, but this is far less visually obvious.)

Yes, the compositor will internally do linear calculations after reading data from a sRGB texture (and converting to linear), but it does NOT have to crush the values to 8 bits [...]

@klausw
Contributor

klausw commented Feb 7, 2024

At the end it has to convert to sRGB

That's a bit misleadingly phrased. That's the case for a traditional display pipeline, but in the end it's up to the GPU and display hardware how the final output gets turned into light. The point is that it is normally able to show distinct colors for low-intensity values beyond what an 8-bit linear encoding can represent.

@cabanier
Member

cabanier commented Feb 7, 2024

At the end it has to convert to sRGB

That's a bit misleadingly phrased. That's the case for a traditional display pipeline, but in the end it's up to the GPU and display hardware how the final output gets turned into light. The point is that it is normally able to show distinct colors for low-intensity values beyond what an 8-bit linear encoding can represent.

OpenXR made the decision to composite in sRGB so it's still the case there. There are of course steps after the final composite to map it to the current display profile.

@cabanier
Member

cabanier commented Feb 7, 2024

If you create a swapchain in linear RGB, the compositor will apply a conversion which will drop precision and make it look too bright. This is why it should be created as sRGB but exposed to the page as linear.

I don't understand what you mean here. Yes, if an app creates sRGB output values (as usual for final output) and the compositor interprets it as a type=linear texture, it will be too bright.

This is not what I said.
The browser creates linear output values. The compositor is supposed to treat it like sRGB and not do any conversion.

The texture needs to be marked as sRGB so that the texture read correctly converts it to linear. That's supposed to be the default behavior, and I think should be the recommended method for the Layers extension by marking the projection layer as colorFormat=SRGB.

No, that would be incorrect. The default of the Layers API should match the default WebGL canvas behavior, which is RGB; regular WebXR has the same default.
Again, I agree that the swapchain should be allocated as sRGB, but when exposed to the page, it should act as if it's linear.
I know this sounds weird but there are facilities to let you do this :-)

@dmarcos
Contributor

dmarcos commented Feb 7, 2024

Thanks for the discussion. I can hardly follow :). My two cents: regardless of the specifics of a solution, I agree behavior should be consistent between regular WebGL and immersive mode, and also across browsers. A-Frame might implement a user agent check as an interim solution unless a fix is coming soon (on the AVP side?). There are no mentions of Safari in the visionOS 1.1 beta. I imagine a fix might take at least weeks or a few months.

@cabanier
Member

cabanier commented Feb 7, 2024

Thanks for the discussion. I can hardly follow :). My two cents: regardless of the specifics of a solution, I agree behavior should be consistent between regular WebGL and immersive mode, and also across browsers. A-Frame might implement a user agent check as an interim solution unless a fix is coming soon (on the AVP side?).

Please wait for Apple to fix this obvious bug. Don't add temporary workarounds that might have other side effects.
This is not an easy thing to fix from the user side since not all GL calls will do the right conversion.

@klausw
Contributor

klausw commented Feb 7, 2024

Please wait for Apple to fix this obvious bug. Don't add temporary workarounds that might have other side effects.
This is not an easy thing to fix from the user side since not all GL calls will do the right conversion.

+1. (I guess if someone wants to develop an application to be ready for a future fixed browser version, it would be OK to temporarily add a local workaround for that, but it shouldn't be deployed on an end user facing web site.)

The browser creates linear output values. The compositor is supposed to treat it like sRGB and not do any conversion.

That sounds wrong, and I don't know if it's just a terminology mismatch. I think we're in agreement that the web app writes its output into the opaque framebuffer using sRGB encoding, same as if it was targeting a 2D display. The browser needs to send this data to the system compositor, and basically has two choices:

  • convert the sRGB data to linear, and pass them to the compositor as linear. Using an 8-bit format for this would be a bad idea due to the data loss issues I describe, but it could use something like GL_RGB10_A2 assuming that's a valid swapchain format for an OpenXR backend.
  • keep the sRGB data as sRGB, and tell that to the compositor by using GL_SRGB8_ALPHA8 format or equivalent for the swapchain buffer. The compositor will internally convert this to linear when needing to do blending operations, then convert back to sRGB for final output. Alternatively, if the compositing scenario is trivial (for example a single projection layer with no other content), it could send the input sRGB data directly to the output as-is without doing any conversions. (This would be equivalent to converting sRGB to linear and back to sRGB, assuming it has sufficient internal precision to do so without visible losses.)

When you say "the browser creates linear output values", do you mean it produces numbers that the compositor can use as-is? That's correct assuming that the compositor is expecting sRGB.

No, that would be incorrect. The default of the layers API should match the default WebGL canvas behavior which is rgb; regular WebXR has the same default.

I don't get this part. Regular WebGL canvas is interpreted as sRGB, that's why for example the recommended Three.JS rendering setup puts a gl_FragColor = linearToSRGB(gl_FragColor) equivalent at the end of shaders. As far as I know there's no place to declare the canvas behavior anywhere, it's implicitly treated that way. I'm assuming that's a relic of pre-compositor days when the browser just put 0..255 values in a framebuffer, and those numbers directly drove electron guns to light up phosphors. That last step had a nonlinear response corresponding to something like gamma 2.2 or sRGB, so any physically-based rendering which naturally works in a linear space needs to convert its output to sRGB so that it displays properly.
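
For concreteness, here's roughly what that final conversion step looks like in a WebGL fragment shader (an illustrative sketch of the pattern, not actual three.js source):

// GLSL in a JS string, as it would be passed to gl.shaderSource().
const fragmentShaderSource = `
  precision highp float;
  varying vec3 vLinearColor; // lighting/blending done in linear space

  vec3 linearToSrgb(vec3 c) {
    return mix(c * 12.92,
               1.055 * pow(c, vec3(1.0 / 2.4)) - 0.055,
               step(vec3(0.0031308), c));
  }

  void main() {
    // The drawing buffer is typed as linear, so the GPU won't encode for us;
    // the shader writes already sRGB-encoded values.
    gl_FragColor = vec4(linearToSrgb(vLinearColor), 1.0);
  }
`;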

I think it would be wrong if a Layers extension XRProjectionLayer with colorFormat=RGBA would treat the data as sRGB and for example pass it on to the compositor without conversion to a GL_SRGB8_ALPHA8 OpenXR swapchain buffer. While that would match the default WebXR rendering, I think it doesn't make sense - what would colorFormat=SRGB8 do differently then? To the best of my knowledge, the default WebXR (and 2D canvas) rendering is equivalent to SRGB8 and would match being used with an OpenXR SRGB8 swapchain.

@klausw
Contributor

klausw commented Feb 7, 2024

I was being sloppy in the last paragraph about alpha channel details - the important part is nonlinear sRGB vs linear RGB, and either one can have an alpha channel added if needed. The core question is, what's the behavior difference between colorFormat=RGBA and colorFormat=SRGB_ALPHA_EXT? If the default of colorFormat=RGBA is intended to match default WebXR behavior (which should match 2D canvas which expects sRGB data), what does colorFormat=SRGB_ALPHA_EXT do differently?

@cabanier
Member

cabanier commented Feb 7, 2024

The browser creates linear output values. The compositor is supposed to treat it like sRGB and not do any conversion.

That sounds wrong, and I don't know if it's just a terminology mismatch.

That may be "wrong" but that is how browsers work.

I think we're in agreement that the web app writes its output into the opaque framebuffer using sRGB encoding, same as if it was targeting a 2D display. The browser needs to send this data to the system compositor, and basically has two choices:

No, we are not in agreement. The framebuffer is in linear RGB same as WebGL. The swapchain is in sRGB.

When you say "the browser creates linear output values", do you mean it produces numbers that the compositor can use as-is? That's correct assuming that the compositor is expecting sRGB.

No, the browser creates linear values. Technically this is wrong but that is how they're exposed to the page.
Browsers are not alone in making this error; there were enough games that made the same assumption that Oculus' old API had a flag to work around this.

No, that would be incorrect. The default of the layers API should match the default WebGL canvas behavior which is rgb; regular WebXR has the same default.

I don't get this part. Regular WebGL canvas is interpreted as sRGB, that's why for example the recommended Three.JS rendering setup puts a gl_FragColor = linearToSRGB(gl_FragColor) equivalent at the end of shaders.

It only does that conditionally and looking at the code, only for sRGB render targets.

I think it would be wrong if a Layers extension XRProjectionLayer with colorFormat=RGBA would treat the data as sRGB and for example pass it on to the compositor without conversion to a GL_SRGB8_ALPHA8 OpenXR swapchain buffer. While that would match the default WebXR rendering, I think it doesn't make sense - what would colorFormat=SRGB8 do differently then? To the best of my knowledge, the default WebXR (and 2D canvas) rendering is equivalent to SRGB8 and would match being used with an OpenXR SRGB8 swapchain.

No, changing the default to sRGB would break every WebXR experience because drawing with RGB would then get gamma correction applied.
I'm happy to discuss this further during a call.

/agenda discuss linear vs sRGB

@probot-label probot-label bot added the agenda Request discussion in the next telecon/FTF label Feb 7, 2024
@klausw
Contributor

klausw commented Feb 7, 2024

It only does that conditionally and looking at the code, only for sRGB render targets.

See Updates to Color Management in three.js r152 - the point is that while it's natural to do rendering in a linear space, treating the output as linear is wrong, leading to plasticky-looking unrealistic shading. Setting renderer.outputEncoding = sRGBEncoding to correctly produce sRGB-encoded output produces much more natural-looking scenes. This does NOT change anything in the way the canvas is treated (there's no way to change that), it's just an internal change to color management to better match the display characteristics. This three.js change was made conditional to avoid breaking existing applications - they may have made their own fixups to get the desired colors, or might prefer the current look.
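
For reference, the relevant three.js setup looks roughly like this (property names per recent three.js releases; an illustration rather than the exact code under discussion):

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.xr.enabled = true;

// r152+ naming: the final render pass encodes linear shading results as sRGB.
renderer.outputColorSpace = THREE.SRGBColorSpace;
// Older releases, as referenced in this thread: renderer.outputEncoding = THREE.sRGBEncoding;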

@cabanier
Member

cabanier commented Feb 7, 2024

Regardless of what three.js did, colors between WebGL and WebGL under WebXR must match. Marking the destination as sRGB will break this.

@klausw
Contributor

klausw commented Feb 7, 2024

No, changing the default to sRGB would break each WebXR experience because now drawing with RGB will get gamma correction applied.

Ah, I think I figured out a potential disconnect. GPU Gems 3 - The Importance of Being Linear says in 24.4.2:

The last step before display is to gamma-correct the final pixel values so that when they're displayed on a monitor with nonlinear response, the image looks "correct." Specifying an sRGB frame buffer leaves the correction to the GPU, and no changes to shaders are required. Any value returned in the shader is gamma-corrected before storage in the frame buffer (or render-to-texture buffer). Furthermore, on GeForce 8-class and later hardware, if blending is enabled, the previously stored value is converted back to linear before blending and the result of the blend is gamma-corrected.

What I was talking about was that reading from an sRGB buffer will automatically apply an sRGB-to-linear conversion.

However, it appears that writing to an sRGB framebuffer will have the GPU auto-apply gamma correction, and we indeed don't want that for WebXR content.

So the canvas and default WebXR projection layer contain sRGB data, and need to be treated as sRGB when being read by a compositor, or when sent directly to a display, but the conversion to sRGB needs to happen manually for historical reasons. So marking an output texture as sRGB would be wrong if this activates the GPU's auto gamma correction since we don't want that. Basically, we need a hybrid texture that appears as linear RGB when being written to (because the writer already should have done gamma correction), contains sRGB data, and should be read by the compositor as a sRGB texture.

Does that clear things up? Apologies for the misunderstanding on my part, I wasn't aware of the auto sRGB gamma correction when writing to declared sRGB textures, assuming that that's what's happening here.

So in the Layers API, declaring the texture as SRGB8_ALPHA8 would mean that the GPU converts to sRGB on write, and the app should skip doing its own gamma correction. In either case the content of the texture is sRGB format color values, and would be read as such by the compositor.

@klausw
Contributor

klausw commented Feb 7, 2024

This interpretation matches the OpenXR backend in Chrome - it tells OpenXR that the swapchain is DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, but sets up the browser-side shared image as linear RGBA8888.

In any case, I think this kind of hybrid buffer is rather surprising and I think the spec would benefit from being clearer about that. (Or is this obvious to everyone else and I'm just being dense?)

@cabanier
Member

cabanier commented Feb 7, 2024

What I was talking about was that reading from an sRGB buffer will automatically apply an sRGB-to-linear conversion.

However, it appears that writing to an sRGB framebuffer will have the GPU auto-apply gamma correction, and we indeed don't want that for WebXR content.

+1

So the canvas and default WebXR projection layer contain sRGB data, and need to be treated as sRGB when being read by a compositor, or when sent directly to a display, but the conversion to sRGB needs to happen manually for historical reasons. So marking an output texture as sRGB would be wrong if this activates the GPU's auto gamma correction since we don't want that. Basically, we need a hybrid texture that appears as linear RGB when being written to (because the writer already should have done gamma correction), contains sRGB data, and should be read by the compositor as a sRGB texture.

Does that clear things up? Apologies for the misunderstanding on my part, I wasn't aware of the auto sRGB gamma correction when writing to declared sRGB textures, assuming that that's what's happening here.

Yes, this is exactly the way it's implemented in the Quest browser. I suspect that the developers on AVP Safari missed this step.

So in the Layers API, declaring the texture as SRGB8_ALPHA8 would mean that the GPU converts to sRGB on write, and the app should skip doing its own gamma correction. In either case the content of the texture is sRGB format color values, and would be read as such by the compositor.

no need for apologies. It IS a confusing subject :-)

@cabanier
Member

cabanier commented Feb 7, 2024

This interpretation matches the OpenXR backend in Chrome - it tells OpenXR that the swapchain is DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, but sets up the browser-side shared image as linear RGBA8888.

In any case, I think this kind of hybrid buffer is rather surprising and I think the spec would benefit from being clearer about that. (Or is this obvious to everyone else and I'm just being dense?)

It is NOT obvious, which is why I raised this issue in the past. I don't know how to fix it in spec text though, since this behavior is not limited to just WebXR.

@klausw
Contributor

klausw commented Feb 7, 2024

@toji pointed me towards https://registry.khronos.org/OpenGL/extensions/ARB/ARB_framebuffer_sRGB.txt which explains the reading and writing behavior. Again, sorry about causing confusion by not being aware of the conversion-on-write behavior, but it still seems weird to me to be working with allegedly linear buffers containing sRGB data.

From the overview:

This extension adds a framebuffer capability for sRGB framebuffer
update and blending. When blending is disabled but the new sRGB
updated mode is enabled (assume the framebuffer supports the
capability), high-precision linear color component values for red,
green, and blue generated by fragment coloring are encoded for sRGB
prior to being written into the framebuffer. When blending is enabled
along with the new sRGB update mode, red, green, and blue framebuffer
color components are treated as sRGB values that are converted to
linear color values, blended with the high-precision color values
generated by fragment coloring, and then the blend result is encoded
for sRGB just prior to being written into the framebuffer.

(The following sections go into detail on how sRGB transforms are applied on read and write.)
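
Restated as a small JS model of the write path that overview describes (my own paraphrase using the standard sRGB transfer functions, not code from the extension):

function srgbToLinear(v) { return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4); }
function linearToSrgb(v) { return v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1 / 2.4) - 0.055; }

// With sRGB update enabled, linear fragment output is encoded on write; if blending
// is on, the stored sRGB value is decoded first so the blend itself stays linear.
function sRGBFramebufferWrite(storedSrgb, fragmentLinear, srcAlpha, blendingEnabled) {
  if (!blendingEnabled) return linearToSrgb(fragmentLinear);
  const blended = fragmentLinear * srcAlpha + srgbToLinear(storedSrgb) * (1 - srcAlpha);
  return linearToSrgb(blended);
}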

@klausw
Contributor

klausw commented Feb 8, 2024

I don't know how to fix it in spec text though since this behavior is not limited to just WebXR.

At a very high level, maybe something along these lines?

WebXR framebuffers always contain sRGB data, meaning that the pixels are intended to be interpreted as a nonlinear intensity curve roughly similar to a traditional CRT display. However, to remain compatible with 2D canvas rendering the default behavior is to NOT do any automatic color curve conversion when drawing to the buffer. Applications need to do their own gamma correction. The recommended method is to do internal lighting calculations in a linear space (see The Importance of Being Linear), and then apply a linear-to-sRGB conversion as a final shader step when writing the output fragment. The browser and GPU will NOT do this for you automatically. If you are using a rendering software package, you should set it to produce nonlinear sRGB output.

Optionally, when using the Layers extension, applications can set up an XRProjectionLayer using a non-default sRGB format. Doing so enables automatic gamma correction, the linear internal shader data is auto-converted to sRGB when being written by the GPU. In this mode, do NOT add your own linear-to-sRGB conversion, for example you should configure your rendering engine to use a linear output color space. Remember that the final buffer content must always be sRGB, the distinction is simply if you do the conversion to sRGB yourself, or if you ask the GPU to do it automatically.

When using a linear RGB colorFormat, your application must apply gamma correction and produce sRGB data from its shaders. When using a nonlinear sRGB colorFormat, the GPU does gamma correction automatically, and your shaders are expected to produce linear data so that they can be converted correctly. (Yes, this sounds backwards and is confusing.) The default WebXR behavior when not using the Layers API is equivalent to colorFormat=RGBA, meaning that the application must do gamma correction and produce sRGB format data from its shaders.

WebXR does not provide any way to produce buffers whose content would be interpreted as linear by the system compositor. (This would also be undesirable since linear 8-bit buffers typically produce color banding due to poor granularity for dark colors.)

For implementers, when not using the Layers API, or when using a Layers API linear RGB output format, the WebXR opaque framebuffer needs to be set up as a hybrid texture. It contains sRGB data, but needs to be exposed to client rendering code as a linear RGB texture so that the GPU does not apply an automatic linear-to-sRGB conversion on write. When this data is passed on to a system compositor or other reader, it must be declared as being in sRGB format (matching its actual data content) so that compositing correctly applies an sRGB-to-linear conversion in the GPU when reading the texture.

When implementing the Layers API, it should work to directly use the provided colorFormat as the texture format for the framebuffer that the client is rendering into, and this should cause the GPU to automatically apply linear-to-sRGB conversion for sRGB colorFormats. However, the texture must always be assumed to contain sRGB data and must be passed to the system compositor as an sRGB format as-is (with no separate data conversion step), using the appropriate vendor format such as DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, GL_SRGB8_ALPHA8, or VK_FORMAT_R8G8B8A8_SRGB. The system compositor will then apply an sRGB-to-linear transform internally when reading the data from the framebuffer texture so that it can do compositing in a linear space, and then finally convert back to sRGB for post-compositing display.

@cabanier
Member

cabanier commented Feb 8, 2024

WebXR framebuffers always contain sRGB data, meaning that the pixels are intended to be interpreted as a nonlinear intensity curve roughly similar to a traditional CRT display. However, to remain compatible with 2D canvas rendering the default behavior is to NOT do any automatic color curve conversion when drawing to the buffer. Applications need to do their own gamma correction. The recommended method is to do internal lighting calculations in a linear space (see The Importance of Being Linear), and then apply a linear-to-sRGB conversion as a final shader step when writing the output fragment. The browser and GPU will NOT do this for you automatically. If you are using a rendering software package, you should set it to produce nonlinear sRGB output.

I don't think we should say that. That implies that WebXR WebGL is different from regular WebGL.

Optionally, when using the Layers extension, applications can set up an XRProjectionLayer using a non-default sRGB format. Doing so enables automatic gamma correction, the linear internal shader data is auto-converted to sRGB when being written by the GPU. In this mode, do NOT add your own linear-to-sRGB conversion, for example you should configure your rendering engine to use a linear output color space. Remember that the final buffer content must always be sRGB, the distinction is simply if you do the conversion to sRGB yourself, or if you ask the GPU to do it automatically.

This is not correct. The projection layer has to be in RGB; just like it is in XRWebGLLayer. (You can query the format in GL)

When using a linear RGB colorFormat, your application must apply gamma correction and produce sRGB data from its shaders. When using a nonlinear sRGB colorFormat, the GPU does gamma correction automatically, and your shaders are expected to produce linear data so that they can be converted correctly. (Yes, this sounds backwards and is confusing.) The default WebXR behavior when not using the Layers API is equivalent to colorFormat=RGBA, meaning that the application must do gamma correction and produce sRGB format data from its shaders.

There is no reason to call out layers. Default projection layer uses the same code path as XRWebGLLayer.

WebXR does not provide any way to produce buffers whose content would be interpreted as linear by the system compositor. (This would also be undesirable since linear 8-bit buffers typically produce color banding due to poor granularity for dark colors.)

That would be up to the system compositor. I don't think we should say this.

For implementers, when not using the Layers API, or when using a Layers API linear RGB output format, the WebXR opaque framebuffer needs to be set up as a hybrid texture. It contains sRGB data, but needs to be exposed to client rendering code as a linear RGB texture so that the GPU does not apply an automatic linear-to-sRGB conversion on write. When this data is passed on to a system compositor or other reader, it must be declared as being in sRGB format (matching its actual data content) so that compositing correctly applies an sRGB-to-linear conversion in the GPU when reading the texture.

No difference between Layers and XRWebGLLayer. As you pointed out, Android and desktop Chrome are doing this as well.

When implementing the Layers API, it should work to directly use the provided colorFormat for the framebuffer that the client is rendering into. However, the texture must always be assumed to contain sRGB data and must be passed to the system compositor as an sRGB format, using the appropriate vendor format such as DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, GL_SRGB8_ALPHA8, or VK_FORMAT_R8G8B8A8_SRGB.

This is also system-dependent, so we can't put that in the spec...

Maybe we can say that colors created in WebXR or WebXR Layers with a (linear) RGBA color format MUST be processed as if they are in the sRGB color format. That is basically what our implementations are doing.

@klausw
Contributor

klausw commented Feb 8, 2024

I don't think we should say that. That implies that WebXR WebGL is different from regular WebGL.

The point is that this is the same as regular WebGL, and that's also what a normal canvas WebGL application should be doing. (See the three.js documentation I had linked earlier.) Setting Three.JS to use sRGB output encoding looks correct both on a 2D screen and in WebXR. (It currently looks wrong in Apple Vision Pro since that does NOT interpret the data as sRGB. It treats the data as linear, meaning it looks much too bright for dark colors, and doesn't match the 2D mode.)

This is not correct. The projection layer has to be in RGB; just like it is in XRWebGLLayer. (You can query the format in GL)

I meant that the pixel data is going to be interpreted as sRGB once compositing is complete. The format as queried by GL needs to be RGB, but technically that's a lie since it doesn't match the pixel interpretation.

@cabanier
Member

cabanier commented Feb 8, 2024

I don't think we should say that. That implies that WebXR WebGL is different from regular WebGL.

The point is that this is the same as regular WebGL, and that's also what a normal canvas WebGL application should be doing. (See the three.js documentation I had linked earlier.) Setting Three.JS to use sRGB output encoding looks correct both on a 2D screen and in WebXR. (It currently looks wrong in Apple Vision Pro since that does NOT interpret the data as sRGB. It treats the data as linear, meaning it looks much too bright for dark colors, and doesn't match the 2D mode.)

Even if you don't set sRGB encoding, the output will be different in Vision Pro. You can see it in the WebXR samples that draw directly without an encoding.

@klausw
Contributor

klausw commented Feb 8, 2024

Even if you don't set sRGB encoding, the output will be different in Vision Pro. You can see it in the WebXR samples that draw directly without an encoding.

I think that's expected, as far as I understand there's no way to actually set an encoding on a WebGL canvas or a WebXR render buffer (other than using Layers with an SRGB colorFormat). Three.js's output encoding and color management is purely internal to the app, it just changes internal calculations. If the Vision Pro treats the WebXR buffer as linear, that would be expected to affect all WebXR applications consistently. (If it's inconsistent, that would mean something else is going on.)

There are various ways to get correct-looking results in a WebGL/WebXR app. Modern Three.JS prefers to work in linear space internally, and reads from source textures by marking them as sRGB (most image sources use that), which makes the GPU convert from sRGB to linear when sampling them. It does lighting and blending in linear space. Then, it does a final linear-to-sRGB conversion in a shader when writing output pixels, assuming it's configured to use sRGB output, which is the normal setting in this mode.

Alternatively, an old-style renderer may be completely oblivious to sRGB vs linear issues. It can read data from textures marked as linear RGB even though they contain nonlinear data, and write the result to the output framebuffer without doing any conversion. That way, the data gets written in its nonlinear form to the output buffer, and it looks (more or less) correct when the output interprets it as sRGB data. This works fine for playing videos for example, but isn't quite right when doing blending or lighting since that should happen in linear space.

I get the impression that the WebXR sample for video playback does this - as far as I can tell the input video texture is just GL.RGBA and gets written to the output buffer as-is, with no gamma correction, so it writes the video's presumably nonlinear pixels directly to the output. (I may have missed some post-conversion, but couldn't find any relevant-looking matches for pow in the source.)

However, I am confused by the PBR shader's gamma correction being disabled - does the rest of the PBR shader already work in sRGB space internally? The glTF spec says that colors are specified as sRGB, so maybe it just works with those directly? However, it's supposed to convert to linear for lighting and then back to nonlinear for display, and I'm a bit confused how this works. @toji, can you chime in? Your commit message that disabled gamma correction just says "Look, just... yeah. It's complicated." I understand the sentiment ;-)

@klausw
Contributor

klausw commented Feb 8, 2024

Apologies for the triple post (on top of being annoyingly verbose about this in general). Github had appeared to fail to submit the post, but had actually done so quietly in the background.

@toji
Member

toji commented Feb 8, 2024

I think that's expected, as far as I understand there's no way to actually set an encoding on a WebGL canvas

WebGL is supposed to use the drawingBufferColorSpace attribute to determine this. It defaults to "srgb" but can be changed after context creation. See https://registry.khronos.org/webgl/specs/latest/1.0/#5.14.1

But I'm not confident in how that's interpreted? It looks like the default framebuffer is allocated with a linear texture format initially, though it can be overridden to an explicitly sRGB format by calling gl.drawingBufferStorage().

(It is worth mentioning that Three.js does not call drawingBufferStorage() at all, so I think we should assume that Three.js is always writing to a linear drawing buffer.)

The spec for the PredefinedColorSpace type says that it specifies "the color space of the canvas's backing store." If we understand the "backing store" to be the WebGL drawing buffer in this case, then that plausibly means that the internal texture, which has a linear format, is interpreted as having sRGB values in it for the purposes of compositing. Hence why most WebGL (and WebGPU) apps find it necessary to apply gamma correction to their final output values.
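
A sketch of those two hooks in use (assuming a WebGL2 context; drawingBufferStorage() is comparatively new, so availability varies):

const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl2');

// How the backing store is interpreted for compositing; defaults to 'srgb'.
gl.drawingBufferColorSpace = 'srgb';

// Optionally reallocate the drawing buffer with an explicitly sRGB-typed format,
// which makes the GPU encode linear shader output to sRGB on write.
if (typeof gl.drawingBufferStorage === 'function') {
  gl.drawingBufferStorage(gl.SRGB8_ALPHA8, gl.drawingBufferWidth, gl.drawingBufferHeight);
}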

The glTF spec says that colors are specified as sRGB, so maybe it just works with those directly? However, it's supposed to convert to linear for lighting and then back to nonlinear for display, and I'm a bit confused how this works.

The typical way that this works is that when loading a glTF file you would place images like the baseColorTexture in a texture that uses an sRGB format, and images like normal and metallic/roughness maps in textures with a linear RGB format. All graphics APIs will convert from the color encoding of the texture to linear when values are sampled in the shader, so as long as you used the right formats for each texture the conversion you mentioned will be done for you. Similarly, when writing out to a framebuffer the values are always given linearly and converted to the encoding of the framebuffer format internally. If your render target is an sRGB format then the conversion is implicit.
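
A minimal WebGL2 sketch of that split (identifiers like baseColorImage are placeholders for decoded glTF images, not names from this thread):

function uploadTexture(gl, image, isColorData) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // Color textures (baseColor, emissive): SRGB8_ALPHA8, so sampling linearizes automatically.
  // Non-color textures (normal, metallic/roughness): plain RGBA8, sampled as stored.
  const internalFormat = isColorData ? gl.SRGB8_ALPHA8 : gl.RGBA8;
  gl.texImage2D(gl.TEXTURE_2D, 0, internalFormat, gl.RGBA, gl.UNSIGNED_BYTE, image);
  gl.generateMipmap(gl.TEXTURE_2D);
  return tex;
}

const baseColorTex = uploadTexture(gl, baseColorImage, true);
const normalTex = uploadTexture(gl, normalImage, false);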

But, as I pointed out above, it looks like by default the WebGL framebuffer will be allocated with a linear format.

@toji, can you chime in?

I apologize for not responding directly so far, this continues to be an area of significant complexity and nuance that I find I have to re-learn every time it comes up. My confidence level in the correctness of my understanding is middling.

@cabanier
Member

cabanier commented Feb 8, 2024

I added some clarifying text to the WebXR Layers spec. It matches the logic of both Chrome and the Quest browser.

@toji
Member

toji commented Feb 8, 2024

Rik's PR language matches my understanding as I gave above in terms of what's actually happening. (Putting aside whether or not that's the "best" way to handle it.)

One thing that I guess I'm still trying to wrap my head around, though, is how that logic actually gets applied in a typical compositor. For example, something like OpenXR. If we look at https://registry.khronos.org/OpenXR/specs/1.0/man/html/XrSwapchain.html it says "Images submitted in sRGB color space must be created using an API-specific sRGB format (e.g. DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, GL_SRGB8_ALPHA8, VK_FORMAT_R8G8B8A8_SRGB) to apply automatic sRGB-to-linear conversion when read by the runtime. All other formats will be treated as linear values."

So does that imply that a bit-exact copy from the linear-but-containing-sRGB-values WebGL texture to the OpenXR swapchain sRGB texture must occur? Maybe Rik could give some insight into how that bit works in the Meta browser?

@cabanier
Member

cabanier commented Feb 8, 2024

So does that imply that a bit-exact copy from the linear-but-containing-sRGB-values WebGL texture to the OpenXR swapchain sRGB texture must occur?

No, there is no copy. The same underlying texture data is used by the RGB and sRGB textures.

Maybe Rik could give some insight into how that bit works in the Meta browser?

Klaus pointed out above how this works in Chrome.
The Quest browser uses glTextureView to alias the buffer storage of an sRGB texture to RGB.

@klausw
Contributor

klausw commented Feb 8, 2024

Thank you Rik, I think your https://github.com/immersive-web/layers/pull/305/files addresses the main concern. I think it would be good to mention somewhere that the default WebXR projection layer (when not using Layers API) acts the same as colorFormat=RGBA/RGB, but I don't know a good place to put that.

But, as I pointed out above, it looks like by default the WebGL framebuffer will be allocated with a linear format.

I think we've reached a consensus here? I feel the important point is to distinguish between "buffer contains linear vs nonlinear pixels" (the mapping from numbers to intended brightness) vs "buffer's declared type is SRGB/RGB" which controls the GPU's automatic gamma conversion to/from sRGB when reading from or writing to a buffer.

In those terms, the default WebGL/WebXR buffers are expected to contain nonlinear pixels. For WebGL/WebXR rendering purposes it needs to be declared as a linear RGB/RGBA type for historical compatibility to ensure the GPU doesn't do gamma correction on output (the app must handle this). The system compositor needs to consume it as sRGB pixels. (Typically this would be done by marking it as a SRGB type and letting the GPU convert to linear on read for further compositing, but that's an implementation detail as long as the end result is equivalent.)

@cabanier
Member

cabanier commented Feb 8, 2024

Thank you Rik, I think your https://github.com/immersive-web/layers/pull/305/files addresses the main concern. I think it would be good to mention somewhere that the default WebXR projection layer (when not using Layers API) acts the same as colorFormat=RGBA/RGB, but I don't know a good place to put that.

That should likely be in the WebXR spec. I'll see if I can find a place for it.

@DRx3D

DRx3D commented Feb 8, 2024

I am quite confident that my understanding of this part of rendering is noticeably less than Brandon's. Much of this is touching on work within Khronos where I start to rely on other experts. What I do know is the following:

  1. glTF uses sRGB for metal-roughness texture (see 3.9.2 Metallic-Roughness Material)
  2. I believe that the above also applies to PBR extensions; however, I am not making that claim
  3. The Commerce WG found that the tone mapping necessary to convert floating point GPU values to screen color space was poor and needed work. This work is mostly complete. It was done by Emmett Lalish (Google). Commerce WG is putting together a report on its findings. Many tone mapping systems use ACES; unfortunately, that was one of the poorest performing systems.

If this is an important discussion item, we should try to get Emmett to attend.

@cabanier
Copy link
Member

cabanier commented Feb 8, 2024

If this is an important discussion item, we should try to get Emmett to attend.

What you list is mostly on the input side, so it doesn't apply.
WebXR Layers does support the creation of a true sRGB target, which might be better suited for the content you're describing.

@Yonet Yonet removed the agenda Request discussion in the next telecon/FTF label Mar 5, 2024