
Figure out what the additive blend mode should do with alpha values #14

Closed
Manishearth opened this issue Aug 16, 2019 · 59 comments

Comments

@Manishearth
Contributor

https://immersive-web.github.io/webxr-ar-module/#dom-xrenvironmentblendmode-additive

A blend mode of additive indicates that the user’s surrounding environment is visible and the baseLayer will be shown additively against it. Alpha values in the baseLayer will be ignored, with the compositor treating all alpha values as 1.0. When this blend mode is in use black pixels will appear fully transparent, and there is no way to make a pixel appear fully opaque.

See #12 (comment) and #12 (comment)

It seems like the Hololens does what the spec says for its primary display (it includes alpha when compositing screencaps); however, the Magic Leap is capable of differentiating between different alpha values used in the rendered scene, i.e. 50% red looks different from 100% red. It effectively uses the "lighter" composition algorithm.

We should decide on what the requirements should be here, and update the spec text for environment blend mode and automatic composition accordingly. It may be worth having two modes here (additive and additive-alpha?). Note that this is not selective darkening, if we start seeing devices with selective darkening we may need yet another blend mode.

cc @thetuvix @cabanier

@cabanier
Member

It seems like the Hololens does what the spec says for its primary display (it includes alpha when compositing screencaps); however, the Magic Leap is capable of differentiating between different alpha values used in the rendered scene, i.e. 50% red looks different from 100% red.

Hololens and ML One composite the scene the same way. The confusion comes from having the alpha premultiplied with the color. This makes it so the compositor can simply ignore the alpha channel and send the premultiplied RGB values to the projector.
So alpha is not ignored. Only the alpha channel is.

It effectively uses the "lighter" composition algorithm.

Yes. The spec should say that the layer uses lighter compositing with the real world (which is an opaque buffer). That way, it's clear what should happen and there is no longer a need to call out special alpha handling.
I will try to reformat the current text and propose a PR.
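For reference, a minimal sketch of what "lighter" compositing over an opaque backdrop works out to per channel, assuming the source pixels are premultiplied (plain JS, illustrative only - not the compositor's actual code):

```js
// "lighter" compositing of a premultiplied source pixel over an opaque
// real-world backdrop: the channels simply add, clamped to 1.0.
// Values are in the 0..1 range; illustrative sketch only.
function lighterOverOpaque(src, world) {
  return {
    r: Math.min(1, src.r + world.r),
    g: Math.min(1, src.g + world.g),
    b: Math.min(1, src.b + world.b),
  };
}

// A 50%-alpha red, premultiplied to (0.5, 0, 0), adds half-strength red
// on top of whatever light the optics already pass through.
console.log(lighterOverOpaque({ r: 0.5, g: 0, b: 0 }, { r: 0.2, g: 0.2, b: 0.2 }));
// → { r: 0.7, g: 0.2, b: 0.2 }
```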

We should decide on what the requirements should be here, and update the spec text for environment blend mode and automatic composition accordingly. It may be worth having two modes here (additive and additive-alpha?). Note that this is not selective darkening, if we start seeing devices with selective darkening we may need yet another blend mode.

I suspect that selective darkening displays will behave like source-over devices.

@thetuvix
Contributor

@cabanier:

Hololens and ML One composite the scene the same way. The confusion comes from having the alpha premultiplied with the color. This makes it so the compositor can simply ignore the alpha channel and send the premultiplied RGB values to the projector.
So alpha is not ignored. Only the alpha channel is.

Specifically, HoloLens presumes that your RGBA buffer has premultiplied alpha, such that it can ignore the alpha channel and just send RGB to the display.

Are you saying here that Magic Leap also assumes premultiplied alpha?

  • If so, we're both conformant and we can be more explicit here in the spec.
  • If not (i.e. if Magic Leap presumes that the app's RGBA buffer is unpremultiplied), we have a gap that would prevent apps from appearing the same across both devices. In the most extreme case, a WebXR app whose shader outputs no alpha into the buffer, leaving it all 0, would appear as intended on HoloLens but show as an empty display on Magic Leap.
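To make the gap concrete, here is a small sketch of how the same RGBA pixel would be read under each convention (plain JS; the helper names are hypothetical, not an API of either device):

```js
// Hypothetical helpers illustrating the two interpretations of an app's
// RGBA pixel; not an API of either device.
function asPremultiplied(px) {
  // Alpha is assumed to already be folded into RGB, so RGB is used as-is.
  return { r: px.r, g: px.g, b: px.b };
}
function asUnpremultiplied(px) {
  // RGB is scaled by alpha before display.
  return { r: px.r * px.a, g: px.g * px.a, b: px.b * px.a };
}

// The extreme case above: the shader wrote color but left alpha at 0.
const pixel = { r: 0.5, g: 0, b: 0, a: 0 };
console.log(asPremultiplied(pixel));   // { r: 0.5, g: 0, b: 0 } → visible red
console.log(asUnpremultiplied(pixel)); // { r: 0, g: 0, b: 0 }   → nothing shown
```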

@Manishearth
Contributor Author

A straightforward litmus test: if you were to draw one red triangle with alpha=100% and another with alpha=50% on each device, would they look different? It seems like on ML the answer is yes, and on HL the answer is no, because HL ignores the alpha and expects you to premultiply it. If this is accurate, then the current spec text is accurate for the HL but not the ML.

Ultimately, this is a difference in the composition and would need to be called out in the algorithm, even if premultiplying is supposed to make the algorithm work out the same. IMO, authors need to be able to write code that works on both devices without having to figure out which device they are actually on.

We have a couple paths forward:

  • Define an additive and additive-premultiplied pair of EBMs, where the latter expects premultiplied alpha values (what the HL does). We can bikeshed the names.
  • Define a separate property called expectsPremultiplied, which is false on everything but the HL (see the sketch after this list)
  • Pick one option and force all implementations to match it. I don't really like this, since it introduces an unavoidable blit-like operation, and it also privileges one approach.
    • To match the spec text as written, this would force webxr implementations for the ML to drop alpha values before sending textures to the device
    • To match the behavior the ML has by default, this would force webxr implementations for the HL to premultiply alpha values before sending textures to the device
  • Allow both, change the spec text around this to be a bit more vague and explicitly call out that both options are allowed. I dislike this since authors won't be able to write experiences that work equally well across devices.
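Purely as a sketch of the first two options above - none of these names shipped, and both 'additive-premultiplied' and expectsPremultiplied are hypothetical - app code might branch like this:

```js
// Hypothetical sketch only: neither "additive-premultiplied" nor
// "expectsPremultiplied" exists in the spec; this just illustrates how an
// author might branch if one of the first two options were adopted.
function appShouldPremultiply(session) {
  if (session.environmentBlendMode === 'additive-premultiplied') return true;
  if (session.expectsPremultiplied === true) return true;
  return false;
}
```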

@thetuvix
Contributor

One further point of clarification is whether expecting "premultiplied" buffers means that only "valid" premultiplied RGBA pixels are accepted, where R, G and B are all <= A, or whether it's looser, with the A channel simply ignored, even if it's all zeroes. On HoloLens, there is no actual use of the app's A channel when scanning out to the primary displays. An A value of 0 is the same as an A value of 1, which led to the current phrasing.

@Manishearth
Contributor Author

One further point of clarification is whether expecting "premultiplied" buffers means that only "valid" premultiplied RGBA pixels are accepted

I'd go by Postel's law on this: we shouldn't reject things here, as long as we can recommend a path for authors that leads to content working consistently. If authors start using alpha on premultiplied-expecting devices (which we can strongly advise against), the ensuing inconsistency is on them.

@thetuvix
Contributor

@Manishearth

We should decide on what the requirements should be here, and update the spec text for environment blend mode and automatic composition accordingly. It may be worth having two modes here (additive and additive-alpha?). Note that this is not selective darkening, if we start seeing devices with selective darkening we may need yet another blend mode.

@cabanier:

I suspect that selective darkening displays will behave like source-over devices.

That's been my expectation as well - the app would be told it's on an "alpha-blend" display and simply specify what RGB color and opacity it expects for each pixel. Video passthrough devices will service any opacity between 0% and 100%, while a selective darkening display might effectively compress to some narrower range based on whatever technology it uses. Both would provide a qualitatively different environment blending as compared to an additive display.

@thetuvix
Contributor

Note that "alpha-blend" displays have the same ambiguity around whether the app's RGBA pixels should be interpreted as premultiplied alpha or unpremultiplied alpha. We need to specify how the UA blends on "alpha-blend" devices as well.

@toji: What's been your expectation around how "alpha-blend" devices like ARKit/ARCore phones will blend the app's RGBA pixels? Will the app be assumed to have provided premultiplied alpha or unpremultiplied alpha?

@cabanier
Member

Note that "alpha-blend" displays have the same ambiguity around whether the app's RGBA pixels should be interpreted as premultiplied alpha or unpremultiplied alpha. We need to specify how the UA blends on "alpha-blend" devices as well.

Yes. We need to say how the compositing happens.

  • opaque does source-over compositing with opaque black

  • alpha-blend does source-over compositing with the opaque real world

  • additive does lighter compositing with the opaque real world

@toji: What's been your expectation around how "alpha-blend" devices like ARKit/ARCore phones will blend the app's RGBA pixels? Will the app be assumed to have provided premultiplied alpha or unpremultiplied alpha?

WebXR does not allow you to specify the premultipliedAlpha parameter, so the buffer always uses the default value: premultiplied. I don't think the WebXR spec must call this out (but it would probably be OK as a note).

It is undefined on the web platform how a canvas buffer should be represented in memory.
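As a sketch of the three blend modes listed above, a UA-side compositor that happens to use WebGL could express them as blend state, assuming the layer's pixels are premultiplied (illustrative only; a real compositor may not use WebGL at all):

```js
// Sketch: mapping the three environment blend modes listed above onto
// WebGL blend state, assuming the layer holds premultiplied alpha.
function configureBlend(gl, environmentBlendMode) {
  gl.enable(gl.BLEND);
  switch (environmentBlendMode) {
    case 'opaque':      // source-over against opaque black
    case 'alpha-blend': // source-over against the (opaque) real world / camera image
      gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
      break;
    case 'additive':    // lighter against the real world
      gl.blendFunc(gl.ONE, gl.ONE);
      break;
  }
}
```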

@cabanier
Member

cabanier commented Aug 20, 2019

@cabanier:

Hololens and ML One composite the scene the same way. The confusion comes from having the alpha premultiplied with the color. This makes it so the compositor can simply ignore the alpha channel and send the premultiplied RGB values to the projector.
So alpha is not ignored. Only the alpha channel is.

Specifically, HoloLens presumes that your RGBA buffer has premultiplied alpha, such that it can ignore the alpha channel and just send RGB to the display.

Are you saying here that Magic Leap also assumes premultiplied alpha?

Yes. AFAIK we don't support non-premultiplied but I would double check.
Does it matter?

@Manishearth
Contributor Author

Yes. AFAIK we don't support non-premultiplied but I would double check.

Wait, so what happens if I draw a triangle in an alpha-enabled context with values 255, 0, 0, 128 vs 255, 0, 0, 255?

If it's the same thing then we already have matching behavior across devices, matching what the spec currently says.

@cabanier
Member

Yes. AFAIK we don't support non-premultiplied but I would double check.

Wait, so what happens if I draw a triangle in an alpha-enabled context with values 255, 0, 0, 128 vs 255, 0, 0, 255?

If it's the same thing then we already have matching behavior across devices, matching what the spec currently says.

Regardless of whether alpha is enabled, you get a triangle with r = 128.
The spec matches the implementation in spirit; we just need to be more explicit about how the compositing works to avoid confusion.

@toji
Member

toji commented Aug 20, 2019

@toji: What's been your expectation around how "alpha-blend" devices like ARKit/ARCore phones will blend the app's RGBA pixels?

I didn't actually have an expectation in this regard, but it sounds from @cabanier and @Manishearth's replies as if there's reasonable de facto behavior already in place (premultiplied) and we ought to just formalize it in the spec text.

@thetuvix
Contributor

Thanks for the extra context! I believe we're aligned then here: XRWebGLLayer will always interpret app pixels as containing premultiplied alpha. We should be thoughtful before exposing more options there, as some native platforms may rely on that assumption for optimal performance. (e.g. to avoid buffer copies before feeding pixels into a scanout that ignores the alpha channel because it assumes alpha has already been multiplied in)

Just to be super specific and confirm our alignment for the example @Manishearth and @cabanier discussed:

  • The app renders a logically (255, 0, 0, 128) triangle. Because we documented WebXR's WebGL buffer as having premultipliedAlpha set to true, the app ensures their shader outputs (128, 0, 0, 128) into the buffer.
  • When that premultipliedAlpha WebGL buffer is handed off to the underlying native API, that API sees pixels with the value (128, 0, 0, 128).
  • The native platform's additive display pipeline either expects premultiplied alpha or discards alpha - either way, it ends up sending (128, 0, 0) on to its displays, showing a 50% red pixel blended with the real world by physics using "lighter"-style blending.

Sounds good?
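As a sketch of the first bullet above, the app-side premultiply typically happens at the end of the fragment shader (GLSL embedded in a JS string here; illustrative only, the uniform name is made up):

```js
// Sketch: the app folds alpha into RGB before writing to the WebXR buffer,
// so a logical (1.0, 0, 0, 0.5) red is stored as (0.5, 0, 0, 0.5).
const fragmentShaderSource = `
  precision mediump float;
  uniform vec4 uColor;  // straight (unpremultiplied) color chosen by the app
  void main() {
    gl_FragColor = vec4(uColor.rgb * uColor.a, uColor.a);  // premultiplied output
  }
`;
```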

@cabanier
Member

@toji: What's been your expectation around how "alpha-blend" devices like ARKit/ARCore phones will blend the app's RGBA pixels?

I didn't actually have an expectation in this regard, but it sounds from @cabanier and @Manishearth's replies as if there's reasonable de facto behavior already in place (premultiplied) and we ought to just formalize it in the spec text.

This would be in the WebXR spec. Do you think it should be added there as a note?

@cabanier
Member

@toji I forgot that it's possible to use an existing WebGL layer and attach it to the session.

Does that mean that WebXR must support the other attributes defined by WebGL? Specifically, do we need to support premultipliedAlpha = false and preserveDrawingBuffer = true?

This looks related to issue 775

@thetuvix
Contributor

Can we simplify our support matrix for WebXR 1.0 and just require that apps attaching their own WebGL framebuffer set premultipliedAlpha to true, equivalent to the behavior we'd specify for framebuffers created by WebXR itself?

@NellWaliczek: As an app building on top of WebXR, do you see anything in your engine that would be blocked by requiring use of premultipliedAlpha = true framebuffers with WebXR?
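For reference, a minimal sketch of the workflow being discussed: an app creating its own WebGL context, keeping the default premultipliedAlpha: true, and attaching it to an immersive session (error handling omitted; assumes this runs in an async handler with user activation):

```js
// Sketch of an app attaching its own WebGL context to a WebXR session.
// premultipliedAlpha already defaults to true; it is spelled out here only
// because the discussion above is about requiring that it stay true.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl', {
  xrCompatible: true,
  premultipliedAlpha: true, // the default
});

const session = await navigator.xr.requestSession('immersive-ar');
await session.updateRenderState({
  baseLayer: new XRWebGLLayer(session, gl),
});
```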

@cabanier
Member

Can we simplify our support matrix for WebXR 1.0 and just require that apps attaching their own WebGL framebuffer set premultipliedAlpha to true, equivalent to the behavior we'd specify for framebuffers created by WebXR itself?

I think that would be a reasonable requirement.
Also, I suspect that preserveDrawingBuffer will have a high runtime cost. We should discuss this in next week's meeting. @NellWaliczek

@NellWaliczek
Member

NellWaliczek commented Aug 22, 2019

@cabanier, absolutely. For future reference, to add an item to the agenda all you need to do is type "/" + "agenda" + " your comment describing why the topic needs discussion". I've done so below as an example, but if I've mis-captured the topic, please issue a pull request to the agenda file in the administrivia repo to correct it.

/agenda Discuss adding XRWebGLLayerInit.premultipliedAlpha and requiring it be set for XRWebGLLayers targeting 'immersive-ar' sessions.

@probot-label probot-label bot added the agenda label Aug 22, 2019
@Artyom17

Artyom17 commented Aug 29, 2019

Answering the question asked about premultiplied alpha in Oculus during the 08.27.19 call: Oculus SDKs (both PC and Mobile) can work with either premultiplied alpha or unpremultiplied alpha.

FYI, OpenXR introduces the XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT and every HW that supports OpenXR should support it; i.e., theoretically, we should give a choice of whether the buffer contains premultiplied alpha rendering or not.

@rcabanier
Contributor

Thanks @Artyom17!

From the call it sounded like everyone was OK with the WebGL buffer containing non-premultiplied data and that we shouldn't throw if the adopted canvas had that option set.
Since this is a supported workflow, should we make it an option in XRWebGLLayerInit, @toji?

@thetuvix
Contributor

Apologies for missing the call! I’ve been on vacation this week.

HoloLens 2 uses a hardware compositor that directly reads the app buffer’s RGB channels as the input into reprojection. Unlike VR headsets, there is no existing full-buffer GPU pass for distortion/etc. that can absorb an alpha multiply “for free”. While our OpenXR runtime will indeed support apps that insist on unpremultiplied layers, it will internally require an extra full-screen pass, which is quite costly on mobile SOCs.

As the industry moves towards battery-powered all-in-one devices, hardware compositors will become more common across XR devices, and avoiding fallback to a full-screen GPU pass will become ever more critical. If apps don’t have strong motivating scenarios to require unpremultiplied alpha, it seems likely that hardware compositors will decide to save the silicon they’d have used to implement it.

Does anyone know of VR/AR content today that relies fundamentally on submitting an unpremultiplied frame buffer? If not, it seems most forward-compatible to future hardware for WebXR to cleanly align around premultiplication from the start.

@rcabanier
Contributor

rcabanier commented Aug 29, 2019

Magic Leap's AR compositor also does not support unpremultiplied layers.

However, the browser does not draw directly into that layer. Instead it draws to an intermediate canvas which is then blitted to the AR compositor's layer. I can see that Servo does the same thing.
Can the fixup for premultiplication be done during that blit?

@thetuvix , is your worry that UAs want to optimize this workflow and draw directly into these compositor layers?

Does anyone know of VR/AR content today that relies fundamentally on submitting an unpremultiplied frame buffer?

I'm unaware of such content

@thetuvix
Contributor

Generally, HoloLens apps do use the platform’s buffer directly - this avoids an unnecessary blit, which can cost an app 1-2ms of its frame time on a mobile GPU.

@Manishearth can comment - my understanding is that Servo is doing a blit today for simplicity as they punch through on their OpenXR backend, but the ultimate intention is to just draw directly into the OpenXR swapchain’s images. We should be sure that the design of WebXR does not prevent UAs from doing that.

@rcabanier
Contributor

rcabanier commented Aug 29, 2019

Is this the goal for Chrome as well @klausw ?
I don't know how Firefox is put together. Does it draw directly into the compositor's buffers @kearwood?

@klausw
Contributor

klausw commented Aug 30, 2019

@rcabanier , I don't have a strong preference myself; the main goal should be that there's a reasonable way for developers to get consistent and predictable results.

Just to make sure we're on the same page, I've made a simple immersive-ar test page that just writes a grid with fixed grayscale/alpha values to the viewport. ~~Alpha is vertical (1 on top), rgb is horizontal (white on the left, black on the right)~~ Edit: Alpha is horizontal (1 on right), rgb is vertical (white on top, black on the bottom): https://storage.googleapis.com/glaretest1024/ar-barebones-alpha.html

Currently, Chrome Canary's experimental immersive-ar doesn't use premultiplied alpha; it's using glBlend(SRC_ALPHA, ONE_MINUS_SRC_ALPHA), so the result looks like this:

EDIT: See #14 (comment) for updated pictures

[screenshot: glBlend(SRC_ALPHA, ONE_MINUS_SRC_ALPHA)]

Assuming I'm interpreting the consensus right, this should be switched to premultiplied alpha, or glBlend(ONE, ONE_MINUS_SRC_ALPHA). When using the source values as-is (allowing the RGB values to exceed the alpha value), that looks like this:

[screenshot: glBlend(ONE, ONE_MINUS_SRC_ALPHA)]

For comparison, I modified Chrome's AR compositor to use glBlend(ONE, ONE) to simulate an additive display with premultiplied alpha (summing rgb values as-is for a "lighten" mode, ignoring alpha):
[screenshot: glBlend(ONE, ONE)]

@rcabanier , does that last image look similar to how your headset renders the test page? (If yes, and if that would be helpful, I think it would be fairly simple to add a "simulate additive AR" developer flag to Chrome to help people test for this scenario on an alpha-blend device.)

Is there an expectation that the implementation should clamp the input rgba values to ensure that rgb values don't exceed the alpha value? This would be possible, but on the other hand I think that not enforcing this restriction may make the behavior closer to additive displays, where bright transparent pixels can drown out the source image in a "lighten"-like mode.
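For context, a rough sketch of the kind of test pattern described above - not the actual test page (which is described as using a stencilled clear); this version uses scissored clears and an illustrative cell layout:

```js
// Rough sketch: write fixed, raw RGBA values (no premultiply) into cells of
// the WebXR framebuffer so the compositor's interpretation can be observed.
function drawAlphaGrid(gl, width, height, steps = 5) {
  gl.enable(gl.SCISSOR_TEST);
  const cellW = Math.floor(width / steps);
  const cellH = Math.floor(height / steps);
  for (let i = 0; i < steps; i++) {     // grayscale value along one axis
    for (let j = 0; j < steps; j++) {   // alpha value along the other axis
      const v = i / (steps - 1);
      const a = j / (steps - 1);
      gl.scissor(i * cellW, j * cellH, cellW, cellH);
      gl.clearColor(v, v, v, a);        // raw values, RGB may exceed alpha
      gl.clear(gl.COLOR_BUFFER_BIT);
    }
  }
  gl.disable(gl.SCISSOR_TEST);
}
```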

@rcabanier
Contributor

@rcabanier , I don't have a strong preference myself; the main goal should be that there's a reasonable way for developers to get consistent and predictable results.

@thetuvix and @Artyom17 want to be able to draw directly into their compositor framebuffers. However, according to Alex, it is significantly slower to draw to a non-premultiplied layer.
Magic Leap does not support this workflow, and if we added it, we would likely suffer a similar cost.

Since this is an expensive workflow that is rarely used, should we just disallow it? It might be surprising to an author that setting this flag makes their scene much slower...

Just to make sure we're on the same page, I've made a simple immersive-ar test page that just writes a grid with fixed grayscale/alpha values to the viewport. Alpha is vertical (1 on top), rgb is horizontal (white on the left, black on the right): https://storage.googleapis.com/glaretest1024/ar-barebones-alpha.html

Currently, Chrome Canary's experimental immersive-ar doesn't use premultiplied alpha; it's using glBlend(SRC_ALPHA, ONE_MINUS_SRC_ALPHA), ...

It's surprising that your output is not using premultiplied. WebGL and WebXR use premultiplied by default and you are not setting any flags to change that. Is there an extra pass that removed the premultiplication?

For comparison, I modified Chrome's AR compositor to use glBlend(ONE, ONE) to simulate an additive display with premultiplied alpha (summing rgb values as-is for a "lighten" mode, ignoring alpha)
...
@rcabanier , does that last image look similar to how your headset renders the test page? (If yes, and if that would be helpful, I think it would be fairly simple to add a "simulate additive AR" developer flag to Chrome to help people test for this scenario on an alpha-blend device.)

The first row matches but the subsequent ones don't.
If your buffers were premultiplied, glBlend(ONE, ONE) would work, but in this case you want glBlend(SRC_ALPHA, DST_ALPHA) (or glBlend(SRC_ALPHA, ONE) if the backdrop is opaque).

It would be great if Chrome could show how a scene would look on an additive display! 👍

Is there an expectation that the implementation should clamp the input rgba values to ensure that rgb values don't exceed the alpha value?

It is up to the author to decide how to generate the RGB values and they are allowed to make them bigger than alpha. (It might give unexpected results when you composite such data though)

This would be possible, but on the other hand I think that not enforcing this restriction may make the behavior closer to additive displays, where bright transparent pixels can drown out the source image in a "lighten"-like mode.

I don't quite follow. Can you elaborate?

@klausw
Contributor

klausw commented Aug 30, 2019

@rcabanier wrote:

@thetuvix and @Artyom17 want to be able to draw directly into their compositor framebuffers. However, according to Alex, it is significantly slower to draw to a non-premultiplied layer.
Magic Leap does not support this workflow, and if we added it, we would likely suffer a similar cost.

Since this is an expensive workflow that is rarely used, should we just disallow it? It might be surprising to an author that setting this flag makes their scene much slower...

Sorry, I don't follow - which workflow would you want to disallow? An UNPREMULTIPLIED_ALPHA_BIT flag or similar?

The way I understood @thetuvix 's comments, Hololens basically lets applications render directly to an RGBA buffer, and this is displayed using the RGB values directly, ignoring the alpha channel. I interpreted that as being a "lighten" operation where the drawn pixels basically get added to the scene's natural light as seen through the transparent optics. The speed penalty would happen if the reprojection step were required to do an alpha multiply; it's not currently doing that, and the expectation is that the application adjusts the RGB brightness via premultiply as appropriate.

It's surprising that your output is not using premultiplied. WebGL and WebXR use premultiplied by default and you are not setting any flags to change that. Is there an extra pass that removed the premultiplication?

The current Chrome implementation has a separate compositing step that blits the rendered image from the WebXR framebuffer onto the camera image, and that has a choice of blend modes for how to interpret the RGBA values it gets from the framebuffer. It receives those exactly as written by the application; there's currently no place where a premultiplication would happen automatically.

@rcabanier , does that last image look similar to how your headset renders the test page? (If yes, and if that would be helpful, I think it would be fairly simple to add a "simulate additive AR" developer flag to Chrome to help people test for this scenario on an alpha-blend device.)

The first row matches but the subsequent ones don't.
If your buffers were premultiplied, glBlend(ONE, ONE) would work, but in this case you want glBlend(SRC_ALPHA, DST_ALPHA) (or glBlend(SRC_ALPHA, ONE) if the backdrop is opaque).

Hm, I'm confused about what's going on here. The test app directly uses glClearColor and a stencilled glClear to write specific values into the output framebuffer. In your implementation, when the app writes {r=1, g=1, b=1, a=0} to the color buffer (~~bottom left corner~~ Edit: top left corner), does that show as fully transparent? To me, that sounds as if you're doing a postmultiply somewhere in your implementation, as opposed to interpreting the application-drawn values as premultiplied.

Based on @thetuvix 's description earlier, if Hololens ignores the alpha value and just uses the RGB values as-is, I'd expect it to show as white.

And yes, the Chrome implementation is treating the camera image backdrop as opaque, but I thought that the glBlendFunc's first argument must be ONE for premultiplied images - glBlendFunc(SRC_ALPHA, ...) would mean it's postmultiplying by alpha.

It's entirely possible that I'm misunderstanding things here though. In case we don't get this sorted out in this thread, maybe a topic for TPAC and experiments?

It would be great if Chrome could show how a scene would look on an additive display! 👍

No promises, this may be too niche as a feature in the main browser. (Some people feel Chrome has too many flags already, not sure why...) But it seems useful at least for an experimental build.

Is there an expectation that the implementation should clamp the input rgba values to ensure that rgb values don't exceed the alpha value?

It is up to the author to decide how to generate the RGB values and they are allowed to make them bigger than alpha. (It might give unexpected results when you composite such data though)

Glad to hear it, that saves a potentially-expensive intermediate step or extra shader complications.

This would be possible, but on the other hand I think that not enforcing this restriction may make the behavior closer to additive displays, where bright transparent pixels can drown out the source image in a "lighten"-like mode.

I don't quite follow. Can you elaborate?

Assuming I'm interpreting the Hololens mode right, it uses the input image's RGB values directly as added light. If using the proposed modified Chrome compositing implementation with glBlendFunc(ONE, ONE_MINUS_SRC_ALPHA), input pixels with alpha=0 should basically do the same thing: the output pixels are simply the (clamped) sum of the drawn pixels and the source image pixels. That's the last row in the second picture (alpha zero), and it should be the same effect as the first row of the third picture (simulated additive mode). As far as I know there's no way to get this effect if there's a requirement that rgb values must not exceed the alpha value.
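A quick numeric sketch of that equivalence (plain JS, values in 0..1, clamped):

```js
// With premultiplied source-over, out = src.rgb + (1 - src.a) * dst.rgb.
// For a source pixel with a = 0 this degenerates to plain addition, i.e.
// the same result as the simulated additive (ONE, ONE) mode.
const clamp = (x) => Math.min(1, x);
const sourceOverPremultiplied = (src, dst) =>
  [0, 1, 2].map((i) => clamp(src.rgb[i] + (1 - src.a) * dst.rgb[i]));
const additive = (src, dst) =>
  [0, 1, 2].map((i) => clamp(src.rgb[i] + dst.rgb[i]));

const brightTransparent = { rgb: [1, 1, 1], a: 0 }; // white written with alpha = 0
const camera = { rgb: [0.3, 0.3, 0.3] };
console.log(sourceOverPremultiplied(brightTransparent, camera)); // [1, 1, 1]
console.log(additive(brightTransparent, camera));                // [1, 1, 1]
```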

@klausw
Contributor

klausw commented Aug 30, 2019

Edit: updated to fix direction references; I initially had an unexpected rotation in the screenshots. See #14 (comment) for updated pictures and explanations.

Just to clarify, the test app at https://storage.googleapis.com/glaretest1024/ar-barebones-alpha.html is intended to show how raw pixel values drawn into the WebXR output framebuffer get interpreted.

The ~~top~~ right row shows opaque grayscale values with alpha=1.

The ~~right~~ bottom edge shows black pixels at varying opacity levels. (For an additive display, these would all be invisible.)

The diagonal from ~~top left to bottom right~~ bottom left to top right shows white pixels that are premultiplied with a range of alpha values, (1,1,1,1) at top left, (0.5, 0.5, 0.5, 0.5) in the center, and (0,0,0,0) at ~~bottom right~~ top right.

The area below ~~/left of~~ the diagonal has pixel values where the RGB values are greater than the alpha value. Those technically aren't valid premultiplied values, unless you interpret them as the result of premultiplying an HDR input pixel with a source RGB value outside the (0..1) range. The test app is intentionally keeping those as-is to see how they get interpreted. As Rik said, "It is up to the author to decide how to generate the RGB values and they are allowed to make them bigger than alpha", and this corner of the test is intended to show what happens when they do that.

@rcabanier
Contributor

rcabanier commented Sep 6, 2019

@toji covered this topic when failIfMajorPerformanceCaveat was first introduced. His strong advice is really that you should only be setting this flag when you've built some alternate rendering path that you'll go down when your WebGL context creation fails. In this case, it's unlikely that the app would have some alternate non-WebGL rendering path to whip out for their WebXR app - instead, the app should probably have just premultiplied its alpha.

To have efficient rendering on the Magic Leap platform, the current design of WebXR (and WebGL?) needs to change. Our target is a texture array and this is also the case for Oculus and Hololens (?). WebXR/GL always renders to a single large texture.

For the current WebXR design, we can support a larger feature set because we'll do the extra conversion steps to make it all happen. A drawback is that it's not as fast as it could be.

Once we ship, we should continue to investigate giving access to texture arrays and for those we will support just the common denominator and really focus on speed.

@klausw
Contributor

klausw commented Sep 6, 2019 via email

@rcabanier
Contributor

... it increasingly seems that there's no action needed now. (Except for clarifying the current expected behavior.)

We would need to add the throwing behavior to the spec. Otherwise authors will be confused that their canvas renders differently in WebGL than WebXR

@AdaRoseCannon
Member

Sorry for removing the label - I was doing some 'agenda' label clean-up after the call. I can re-add it if you need?

/agenda Discuss adding XRWebGLLayerInit.premultipliedAlpha and requiring it be set for XRWebGLLayers targeting 'immersive-ar' sessions.

@probot-label probot-label bot added the agenda label Sep 6, 2019
@thetuvix
Contributor

thetuvix commented Sep 6, 2019

@klausw:

That's a good point, and I think this also adds another reason that if it's
configurable, it should logically be a property of the layer, not of the
underlying GL context. Several XRWebGLLayers might share the same GL
context but want to use different blend modes and attributes for them.

For what it's worth, the equivalent OpenXR option, XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT is a per-layer option. For apps targeting HoloLens, we are developing a validation layer that will warn developers if this option is used. That validation approach can work for HoloLens-specific apps deployed to a store, but it's not viable for WebXR content published broadly to the web.

@klausw:

To expand on that a bit, I think it's misleading to say that the existing WebGL premultipliedAlpha getContext attribute is a general property of a WebGL context since (AFAIK) it doesn't change the semantics of any WebGL commands.

@rcabanier:

We would need to add the throwing behavior to the spec. Otherwise authors will be confused that their canvas renders differently in WebGL than WebXR

The primary thing we need to do is clearly specify that WebXR always interprets framebuffers as having premultiplied alpha.

Beyond that, what does the premultipliedAlpha WebGL context option affect in a UA today? @klausw has stated that it doesn't affect how a sequence of draw calls render within the confines of a given buffer. Does that context option solely affect how the context's flattened buffers are then composed with other DOM elements? If so, that is logically similar to the environment blending being discussed here, although I'd like to hear from those with more experience in WebGL around how this all fits with their expectations.

@klausw
Contributor

klausw commented Sep 6, 2019

@rcabanier :

We would need to add the throwing behavior to the spec. Otherwise authors will be confused that their canvas renders differently in WebGL than WebXR

I think the spec would only need to document throwing behavior if there actually is a way to ask for an unsupported configuration, and I thought we were moving in a direction where that wasn't possible?

If we agree that the existing canvas.getContext(..., {premultipliedAlpha: false}) attribute only applies to the default framebuffer, not to the XRWebGLLayer, the spec would just need to clarify that and document that the XRWebGLLayer framebuffer is always interpreted as premultiplied. I don't think it would be appropriate to throw an exception in this case since it would be a well-defined (if unusual) configuration if an application uses different premultiplication modes for its two simultaneous output paths, though a console warning along the lines of "FYI, you're doing something strange, are you sure?" may be helpful.

Or did you mean that we should add a new XRWebGLLayerInit.premultipliedAlpha option just for the purpose of throwing an exception when someone tries to set it to something other than the default value?

@thetuvix:

Beyond that, what does the premultipliedAlpha WebGL context option affect in a UA today? @klausw has stated that it doesn't affect how a sequence of draw calls render within the confines of a given buffer. Does that context option solely affect how the context's flattened buffers are then composed with other DOM elements? If so, that is logically similar to the environment blending being discussed here, although I'd like to hear from those with more experience in WebGL around how this all fits with their expectations.

Yes, to the best of my knowledge this really only affects how the canvas content is composited. When using a source pixel from the backing default framebuffer in a blend equation that has a (source_alpha * source_color) term, the compositor does this multiplication if premultipliedAlpha is false, but if premultipliedAlpha is true it assumes the producer already did so and uses the color as-is. (The source_alpha may still be used for other purposes such as a (1 - source_alpha) multiplier of the background color.)

In case it helps, https://jsfiddle.net/1mtesp3v/ shows the effect of changing the premultipliedAlpha attribute for canvas compositing:
[screenshot: compositing example]
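A rough reconstruction of that kind of comparison (not the linked fiddle itself): two canvases cleared to the same raw RGBA value, differing only in the premultipliedAlpha context attribute:

```js
// Rough sketch (not the linked fiddle): the same raw pixel value gets
// composited with the page differently depending only on the context's
// premultipliedAlpha attribute.
for (const premultipliedAlpha of [true, false]) {
  const canvas = document.createElement('canvas');
  document.body.appendChild(canvas);
  const gl = canvas.getContext('webgl', { alpha: true, premultipliedAlpha });
  gl.clearColor(1, 0, 0, 0.5); // raw value written by the app
  gl.clear(gl.COLOR_BUFFER_BIT);
  // With premultipliedAlpha: true the page compositor treats (1, 0, 0) as
  // already scaled by alpha; with false it multiplies by 0.5 first, so the
  // two canvases blend differently over the page background.
}
```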

@cabanier
Member

cabanier commented Sep 7, 2019

@rcabanier :

We would need to add the throwing behavior to the spec. Otherwise authors will be confused that their canvas renders differently in WebGL than WebXR

I think the spec would only need to document throwing behavior if there actually is a way to ask for an unsupported configuration, and I thought we were moving in a direction where that wasn't possible?

No, it is possible. An author can create a canvas with premultiplied=false and hand it to WebXR.
We need to throw when that happens.

If we agree that the existing canvas.getContext(..., {premultipliedAlpha: false}) attribute only applies to the default framebuffer, not to the XRWebGLLayer, the spec would just need to clarify that and document that the XRWebGLLayer framebuffer is always interpreted as premultiplied. I don't think it would be appropriate to throw an exception in this case since it would be a well-defined (if unusual) configuration if an application uses different premultiplication modes for its two simultaneous output paths, though a console warning along the lines of "FYI, you're doing something strange, are you sure?" may be helpful.

If the author shows a preview of the canvas on the page (which is very common) and sets premultiplied=false, then if we start a WebXR session with that canvas, the colors will look wrong.
This will be unexpected, so either we support this workflow (which is not desirable) or we throw.

Or did you mean that we should add a new XRWebGLLayerInit.premultipliedAlpha option just for the purpose of throwing an exception when someone tries to set it to something other than the default value?

No. Only when the author creates it outside of WebXR.

@thetuvix:

Beyond that, what does the premultipliedAlpha WebGL context option affect in a UA today? @klausw has stated that it doesn't affect how a sequence of draw calls render within the confines of a given buffer. Does that context option solely affect how the context's flattened buffers are then composed with other DOM elements? If so, that is logically similar to the environment blending being discussed here, although I'd like to hear from those with more experience in WebGL around how this all fits with their expectations.

Yes, to the best of my knowledge this really only affects how the canvas content is composited. When using a source pixel from the backing default framebuffer in a blend equation that has a (source_alpha * source_color) term, the compositor does this multiplication if premultipliedAlpha is false, but if premultipliedAlpha is true it assumes the producer already did so and uses the color as-is. (The source_alpha may still be used for other purposes such as a (1 - source_alpha) multiplier of the background color.)

I think @thetuvix means: what happens if we start a WebXR session with such a canvas? Is the alpha multiplied with the color or is it ignored?
I think for Magic Leap, the alpha is ignored...

@klausw
Contributor

klausw commented Sep 7, 2019 via email

@klausw
Contributor

klausw commented Sep 7, 2019

To clarify, what I said above applies to immersive sessions, and I was assuming that this is what we were talking about in this issue.

Inline WebXR sessions do reuse the canvas and its default framebuffer, and for inline sessions the existing premultipliedAlpha setting from context creation is applicable and should be respected, but this would also be consistent with the logic discussed here since inline sessions ignore the XRWebGLLayerInit attributes. Would it be necessary and appropriate to throw exceptions for inline sessions, or could the user agent just apply the setting and do an alpha multiply?

@Manishearth
Contributor Author

(Sorry for missing lots of this discussion, I was on vacation for a week and a half and then was catching up with stuff for a while)

To me it seems like the current status is:

  • We need to discuss whether premultipliedAlpha on XRWebGLLayerInit makes sense for all webxr (this is on the agenda)
    • Decide if we want to have it at all
    • What happens with fast/slow paths if we have this?
    • What happens when we support compositing multiple layers?
  • We're largely in agreement as to how AR additive works, but we perhaps need to clarify the AR spec text about this.
  • (Did I miss anything?)

Maybe we can solve the second one with a PR to close out this issue and open a new one specifically for the premultiplied alpha thing? This discussion has gotten quite long and has touched on a variety of topics at this point 😄

@klausw
Contributor

klausw commented Sep 9, 2019

@Manishearth , I agree with your list, though I'd add one more item to the discussion:

  • clarify what's supposed to happen if an application uses the pre-existing canvas.getContext(..., {premultipliedAlpha: false}) flag.
    • Do we agree that this flag is only intended to apply to the default framebuffer used for canvas composition, and that WebXR immersive mode's XRWebGLLayer framebuffer has either an implicit value of premultipliedAlpha=true, or a separate XRWebGLLayerInit flag?
    • Should this trigger a diagnostic or exception if an immersive session uses an existing context that had specified this in combination with an XRWebGLLayer that (implicitly or explicitly) uses premultipliedAlpha=true, or is it sufficient to address this in documentation?

@Manishearth
Contributor Author

Another option is to fail makeXRCompatible() there, perhaps?

@cwilso
Member

cwilso commented Sep 9, 2019

/facetoface

@Manishearth
Contributor Author

Filed two issues about premultiplied alpha on the layer as immersive-web/webxr#837 and immersive-web/webxr#838.

As I understand it we just need to mention something about alpha values being premultiplied in the spec to solve any remaining ambiguity and close this issue. I'll make a PR.

@Manishearth
Contributor Author

Potential fix in #25. I may have missed something.

Manishearth added a commit to Manishearth/webxr-ar-module that referenced this issue Sep 12, 2019
@Manishearth
Contributor Author

Closing, seems like everything here can be fixed by the alpha issues filed on the core spec.
