Figure out what the additive blend mode should do with alpha values #14
HoloLens and Magic Leap One composite the scene the same way. The confusion comes from having the alpha premultiplied with the color. This makes it so the compositor can simply ignore the alpha channel and send the premultiplied RGB values to the projector.
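As an illustrative sketch of that convention (my own code, not from any SDK, assuming normalized [0..1] channels): premultiplying scales each color channel by alpha up front, so a compositor can forward RGB to the display without ever reading A.

```javascript
// Hypothetical helper: convert a straight (non-premultiplied) RGBA pixel
// into its premultiplied form. After this, the compositor can send the
// RGB channels to the projector and ignore the alpha channel entirely.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}

// A 50%-alpha red pixel becomes half-brightness red light:
// premultiply([1, 0, 0, 0.5]) yields [0.5, 0, 0, 0.5]
```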
Yes. The spec should say that the layer uses
I suspect that selective darkening displays will behave like source-over devices.
Specifically, HoloLens presumes that your RGBA buffer has premultiplied alpha, such that it can ignore the alpha channel and just send RGB to the display. Are you saying here that Magic Leap also assumes premultiplied alpha?
A straightforward litmus test: if you were to draw a red triangle with a=100% and another with a=50% on each device, would they look different? It seems like on ML the answer is yes, and on HL the answer is no, because HL ignores the alpha and expects you to premultiply it. If this is accurate, then the current spec text is accurate for the HL but not the ML. Ultimately, this is a difference in the composition and would need to be called out in the algorithm, even if premultiplying is supposed to make the algorithm work the same. IMO authors need to be able to write code that works on both devices without having to figure out what the actual device is. We have a couple paths forward:
One further point of clarification is whether expecting "premultiplied" buffers means that only "valid" premultiplied RGBA pixels are accepted, where R, G and B are all <= A, or whether it's looser, with the A channel simply ignored, even if it's all zeroes. On HoloLens, there is no actual use of the app's A channel when scanning out to the primary displays. An A value of 0 is the same as an A value of 1, which led to the current phrasing.
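The stricter reading could be expressed as a validity predicate (an illustrative sketch, not spec text):

```javascript
// "Valid" premultiplied pixels satisfy R, G, B <= A, since a straight color
// channel in [0..1] multiplied by alpha can never exceed alpha.
function isValidPremultiplied([r, g, b, a]) {
  return r <= a && g <= a && b <= a;
}
```

Under the looser reading described above, a pixel like [1, 1, 1, 0] would still be accepted and displayed as white, since A is never consulted at scan-out.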
I'd go by Postel's law on this: we shouldn't reject things here, as long as we can recommend a path for authors that leads to content working consistently. If authors start using alpha on premultiplied-expecting devices (which we can strongly advise against), the ensuing inconsistency is on them.
That's been my expectation as well - the app would be told it's on an
Note that @toji: What's been your expectation around how
Yes. We need to say how the compositing happens.
WebXR does not allow you to specify the premultipliedAlpha parameter, so the buffer always has the default value: premultiplied. I don't think the WebXR spec must call this out (but it would probably be OK as a note). It is undefined on the web platform how a canvas buffer should be represented in memory.
Yes. AFAIK we don't support non-premultiplied, but I would double check.
Wait, so what happens if I draw a triangle in an alpha-enabled context with those values? If it's the same thing, then we already have matching behavior across devices, matching what the spec currently says.
Regardless of whether alpha is enabled, you get a triangle with r = 128.
I didn't actually have an expectation in this regard, but it sounds from @cabanier's and @Manishearth's replies as if there's reasonable de facto behavior already in place (premultiplied) and we ought to just formalize it in the spec text.
Thanks for the extra context! I believe we're aligned here, then. Just to be super specific and confirm our alignment for the example @Manishearth and @cabanier discussed:
Sounds good?
This would be in the WebXR spec. Do you think it should be added there as a note?
@toji I forgot that it's possible to use an existing WebGL layer and attach it to the session. Does that mean that WebXR must support the other attributes defined by WebGL? Specifically, do we need to support premultipliedAlpha = false and preserveDrawingBuffer = true? This looks related to issue 775
Can we simplify our support matrix for WebXR 1.0 and just require that apps attaching their own WebGL framebuffer set @NellWaliczek: As an app building on top of WebXR, do you see anything in your engine that would be blocked by requiring use of
I think that would be a reasonable requirement.
@cabanier, absolutely. For future reference, to add an item to the agenda all you need to do is type "/" + "agenda" + " your comment describing why the topic needs discussion". I've done so below as an example, but if I've mis-captured the topic, please issue a pull request to the agenda file in the administrivia repo to correct it. /agenda Discuss adding
Answering the question asked about premultiplied alpha in Oculus during the 08.27.19 call: Oculus SDKs (both PC and Mobile) can work with either premultiplied alpha or unpremultiplied alpha. FYI, OpenXR introduces the XR_COMPOSITION_LAYER_UNPREMULTIPLIED_ALPHA_BIT and every HW that supports OpenXR should support it, i.e. theoretically, we should give a choice whether the buffer contains premultiplied alpha rendering or not.
Thanks @Artyom17! From the call it sounded like everyone was OK with the WebGL buffer containing non-premultiplied data, and that we shouldn't throw if the adopted canvas had that option set.
Apologies for missing the call! I've been on vacation this week. HoloLens 2 uses a hardware compositor that directly reads the app buffer's RGB channels as the input into reprojection. Unlike VR headsets, there is no existing full-buffer GPU pass for distortion/etc. that can absorb an alpha multiply "for free". While our OpenXR runtime will indeed support apps that insist on unpremultiplied layers, it will internally require an extra full-screen pass, which is quite costly on mobile SOCs. As the industry moves towards battery-powered all-in-one devices, hardware compositors will become more common across XR devices, and avoiding fallback to a full-screen GPU pass will become ever more critical. If apps don't have strong motivating scenarios to require unpremultiplied alpha, it seems likely that hardware compositors will decide to save the silicon they'd have used to implement it. Does anyone know of VR/AR content today that relies fundamentally on submitting an unpremultiplied frame buffer? If not, it seems most forward-compatible to future hardware for WebXR to cleanly align around premultiplication from the start.
Magic Leap's AR compositor also does not support unpremultiplied layers. However, the browser does not draw directly into that layer. Instead it draws to an intermediate canvas which is then blitted to the AR compositor's layer. I can see that Servo does the same thing. @thetuvix , is your worry that UAs want to optimize this workflow and draw directly into these compositor layers?
I'm unaware of such content
Generally, HoloLens apps do use the platform's buffer directly - this avoids an unnecessary blit, which can cost an app 1-2ms of its frame time on a mobile GPU. @Manishearth can comment - my understanding is that Servo is doing a blit today for simplicity as they punch through on their OpenXR backend, but the ultimate intention is to draw directly into the OpenXR swapchain's images. We should be sure that the design of WebXR does not prevent UAs from doing that.
@rcabanier, I don't have a strong preference myself; the main goal should be that there's a reasonable way for developers to get consistent and predictable results.

Just to make sure we're on the same page, I've made a simple immersive-ar test page that just writes a grid with fixed grayscale/alpha values to the viewport. Currently, Chrome Canary's experimental

EDIT: See #14 (comment) for updated pictures

Assuming I'm interpreting the consensus right, this should be switched to premultiplied alpha, or

For comparison, I modified Chrome's AR compositor to use

@rcabanier, does that last image look similar to how your headset renders the test page? (If yes, and if that would be helpful, I think it would be fairly simple to add a "simulate additive AR" developer flag to Chrome to help people test for this scenario on an alpha-blend device.)

Is there an expectation that the implementation should clamp the input rgba values to ensure that rgb values don't exceed the alpha value? This would be possible, but on the other hand I think that not enforcing this restriction may make the behavior closer to additive displays, where bright transparent pixels can drown out the source image in a "lighten"-like mode.
@thetuvix and @Artyom17 want to be able to draw directly into their compositor framebuffers. However, according to Alex, it is significantly slower to draw to a non-premultiplied layer. Since this is an expensive workflow that is rarely used, should we just disallow it? It might be surprising to an author that setting this flag makes their scene much slower...
It's surprising that your output is not using premultiplied alpha. WebGL and WebXR use premultiplied by default, and you are not setting any flags to change that. Is there an extra pass that removes the premultiplication?
The first row matches but the subsequent ones don't. It would be great if Chrome could show how a scene would look on an additive display! 👍
It is up to the author to decide how to generate the RGB values, and they are allowed to make them bigger than alpha. (It might give unexpected results when you composite such data, though.)
I don't quite follow. Can you elaborate?
@rcabanier wrote:
Sorry, I don't follow - which workflow would you want to disallow? An UNPREMULTIPLIED_ALPHA_BIT flag or similar? The way I understood @thetuvix's comments, HoloLens basically lets applications render directly to an RGBA buffer, and this is displayed using the RGB values directly, ignoring the alpha channel. I interpreted that as being a "lighten" operation where the drawn pixels basically get added to the scene's natural light as seen through the transparent optics. The speed penalty would happen if the reprojection step were required to do an alpha multiply; it's not currently doing that, and the expectation is that the application adjusts the RGB brightness via premultiply as appropriate.
The current Chrome implementation has a separate compositing step that blits the rendered image from the WebXR framebuffer onto the camera image, and that has a choice of blend modes for how to interpret the RGBA values it gets from the framebuffer. It receives those exactly as written by the application; there's currently no place where a premultiplication would happen automatically.
Hm, I'm confused about what's going on here. The test app directly uses glClearColor and a stencilled glClear to write specific values into the output framebuffer. In your implementation, when the app writes {r=1, g=1, b=1, a=0} to the color buffer (

Based on @thetuvix's description earlier, if HoloLens ignores the alpha value and just uses the RGB values as-is, I'd expect it to show as white. And yes, the Chrome implementation is treating the camera image backdrop as opaque, but I thought that the glBlendFunc's first argument must be ONE for premultiplied images -

It's entirely possible that I'm misunderstanding things here, though. In case we don't get this sorted out in this thread, maybe this is a topic for TPAC and experiments?
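For reference, the blend math in question - glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), the conventional setting for premultiplied sources - can be sketched like this (an illustrative model, not Chrome's actual compositor code):

```javascript
// Premultiplied source-over: out = src + (1 - src_a) * dst, clamped to [0..1].
// src is a premultiplied RGBA pixel; dst is the opaque RGB backdrop.
function premultipliedSourceOver(src, dst) {
  const a = src[3];
  return dst.map((d, i) => Math.min(1, src[i] + (1 - a) * d));
}

// With src = [1, 1, 1, 0], the (1 - a) factor is 1, so the full backdrop
// survives and full white light is added on top: the result clamps to white.
```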
No promises - this may be too niche as a feature in the main browser. (Some people feel Chrome has too many flags already, not sure why...) But it seems useful at least for an experimental build.
Glad to hear it - that saves a potentially-expensive intermediate step or extra shader complications.
Assuming I'm interpreting the HoloLens mode right, it uses the input image's RGB values directly as added light. If using the proposed modified Chrome compositing implementation with
Edit: updated to fix direction references; I initially had an unexpected rotation in screenshots. See #14 (comment) for updated pictures and explanations.

Just to clarify, the test app at https://storage.googleapis.com/glaretest1024/ar-barebones-alpha.html is intended to show how raw pixel values drawn into the WebXR output framebuffer get interpreted.

The

The

The diagonal from

The area below/left of the diagonal has pixel values where the RGB values are greater than the alpha value. Those technically aren't valid premultiplied values, unless you interpret them as the result of premultiplying an HDR input pixel with a source RGB value outside the (0..1) range. The test app is intentionally keeping those as-is to see how they get interpreted. As Rik said, "It is up to the author to decide how to generate the RGB values and they are allowed to make them bigger than alpha", and this corner of the test is intended to show what happens when they do that.
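To make that HDR interpretation concrete (my own sketch, not part of the test app): unpremultiplying a pixel whose RGB exceeds its alpha recovers a straight color outside the [0..1] range.

```javascript
// Recover the straight color from a premultiplied pixel. When RGB > A, the
// recovered channels exceed 1.0, i.e. the pixel reads as an HDR source color.
// Treating a = 0 as black is an arbitrary convention for this sketch.
function unpremultiply([r, g, b, a]) {
  return a === 0 ? [0, 0, 0] : [r / a, g / a, b / a];
}

// unpremultiply([1, 0, 0, 0.5]) recovers [2, 0, 0]: a "200% red" source pixel.
```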
To have efficient rendering on the Magic Leap platform, the current design of WebXR (and WebGL?) needs to change. Our target is a texture array, and this is also the case for Oculus and HoloLens (?). WebXR/GL always renders to a single large texture. For the current WebXR design, we can support a larger feature set because we'll do the extra conversion steps to make it all happen. A drawback is that it's not as fast as it could be. Once we ship, we should continue to investigate giving access to texture arrays, and for those we will support just the common denominator and really focus on speed.
Once we have multiple XRWebGLLayers, they will composite with each other, at which point *premultipliedAlpha* should be taken into account.

That's a good point, and I think this also adds another reason that if it's configurable, it should logically be a property of the layer, not of the underlying GL context. Several XRWebGLLayers might share the same GL context but want to use different blend modes and attributes for them.

Let's always require it. If authors want to have it, we can always relax it later (or authors can implement it themselves!)

That sounds reasonable to me. I get the impression that Nell's suggestion to add a new flag was a reaction to Alex indicating we need to handle this, which may in turn have been based on the assumption that the existing canvas getContext flag should affect XR compositing, but it increasingly seems that there's no action needed now. (Except for clarifying the current expected behavior.)
We would need to add the throwing behavior to the spec. Otherwise, authors will be confused that their canvas renders differently in WebGL than in WebXR.
Sorry for removing the label - I was doing some 'agenda' label cleanup after the call. I can re-add it if you need. /agenda Discuss adding
For what it's worth, the equivalent OpenXR option,
The primary thing we need to do is clearly specify that WebXR always interprets framebuffers as having premultiplied alpha. Beyond that, what does the premultipliedAlpha WebGL context option affect in a UA today? @klausw has stated that it doesn't affect how a sequence of draw calls render within the confines of a given buffer. Does that context option solely affect how the context's flattened buffers are then composed with other DOM elements? If so, that is logically similar to the environment blending being discussed here, although I'd like to hear from those with more experience in WebGL around how this all fits with their expectations.
I think the spec would only need to document throwing behavior if there actually is a way to ask for an unsupported configuration, and I thought we were moving in a direction where that wasn't possible? If we agree that the existing

Or did you mean that we should add a new XRWebGLLayerInit.premultipliedAlpha option just for the purpose of throwing an exception when someone tries to set it to something other than the default value?
Yes, to the best of my knowledge this really only affects how the canvas content is composited. When using a source pixel from the backing default framebuffer in a blend equation that has a (source_alpha * source_color) term, the compositor does this multiplication if premultipliedAlpha is false, but if premultipliedAlpha is true it assumes the producer already did so and uses the color as-is. (The source_alpha may still be used for other purposes such as a (1 - source_alpha) multiplier of the background color.) In case it helps, https://jsfiddle.net/1mtesp3v/ shows the effect of changing the
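That described behavior could be sketched as follows (illustrative only; the real compositing happens inside the browser's compositor):

```javascript
// The source-color term the compositor feeds into its blend equation:
// with premultipliedAlpha=false it performs the (source_alpha * source_color)
// multiply itself; with true it trusts the producer and uses the color as-is.
function sourceColorTerm(rgb, alpha, premultipliedAlpha) {
  return premultipliedAlpha ? rgb : rgb.map(c => c * alpha);
}
```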
No, it is possible. An author can create a canvas with premultiplied=false and hand it to WebXR. We need to throw when that happens.
If the author shows a preview of the canvas on the page (which is very common) and sets premultiplied=false, then if we start a WebXR session with that canvas, the colors will look wrong. This will be unexpected, so either we support this workflow (which is not desirable) or we throw.
No. Only when the author creates it outside of WebXR.
I think @thetuvix means: what happens if we start a WebXR session with such a canvas? Is the alpha multiplied with the color or is it ignored?
On Fri, Sep 6, 2019, 21:22 Rik Cabanier wrote:

> No, it is possible. An author can create a canvas with premultiplied=false and hand it to WebXR. We need to throw when that happens.

Unless I'm missing something, there is no way to hand a canvas to WebXR or to tell WebXR to directly send the existing content of a canvas's backing default framebuffer to WebXR.

When creating a XRWebGLLayer, you can pass it a GL *context* (not a canvas), and this creates a new opaque framebuffer with attributes based on XRWebGLLayerInit. That's a new framebuffer object in this WebGL context, separate from the default framebuffer backing the canvas, with separate attributes; it does not inherit any buffer properties from the default framebuffer. If you want content to appear in WebXR, you need to draw it into that XRWebGLLayer framebuffer as a separate step, though you can reuse existing resources from the WebGL context. At minimum, you'd need to blit pixels. At this time, the application needs to apply a premultiply if necessary as part of this new drawing operation if it's using transparency.

Since this new destination framebuffer doesn't share any configuration attributes with the default framebuffer - i.e. antialias and depth are all supplied separately via XRWebGLLayerInit - it seems consistent to also interpret premultipliedAlpha as a framebuffer property (in the sense of being metadata telling the consumer of that buffer how to interpret it), analogous to how OpenXR treats it as a layer property. See also previous comments; I don't think it is very useful to treat premultipliedAlpha as a property of the WebGL context as a whole. For example, an app would be free to create its own framebuffer objects and use them with or without premultiplying as part of its overall render pipeline.

If we agree that the existing canvas.getContext(..., {premultipliedAlpha: false}) attribute only applies to the default framebuffer, not to the XRWebGLLayer, the spec would just need to clarify that and document that the XRWebGLLayer framebuffer is always interpreted as premultiplied. I don't think it would be appropriate to throw an exception in this case, since it would be a well-defined (if unusual) configuration if an application uses different premultiplication modes for its two simultaneous output paths, though a console warning along the lines of "FYI, you're doing something strange, are you sure?" may be helpful.

> If the author shows a preview of the canvas on the page (which is very common) and sets premultiplied=false, then if we start a WebXR session with that canvas, the colors will look wrong. This will be unexpected, so either we support this workflow (which is not desirable) or we throw.

> I think @thetuvix means: what happens if we start a WebXR session with such a canvas? Is the alpha multiplied with the color or is it ignored?

> I think for Magic Leap, the alpha is ignored...

I think this is a bit misleading, since there's no direct way to reuse previously drawn pixels from the canvas. Since drawing to the new WebXR buffer needs a separate drawing step, I think it's not unreasonable to require following the premultiply requirements for this new drawing destination. I'd be in favor of a console warning if the default framebuffer uses premultipliedAlpha=false to remind developers, but an exception seems excessive since the developer may be doing the right thing after all.
To clarify, what I said above applies to immersive sessions, and I was assuming that this is what we were talking about in this issue. Inline WebXR sessions do reuse the canvas and its default framebuffer, and for inline sessions the existing premultipliedAlpha setting from context creation is applicable and should be respected, but this would also be consistent with the logic discussed here since inline sessions ignore the XRWebGLLayerInit attributes. Would it be necessary and appropriate to throw exceptions for inline sessions, or could the user agent just apply the setting and do an alpha multiply?
(Sorry for missing lots of this discussion, I was on vacation for a week and a half and then was catching up with stuff for a while) To me it seems like the current status is:
Maybe we can solve the second one with a PR to close out this issue and open a new one specifically for the premultiplied alpha thing? This discussion has gotten quite long and has touched on a variety of topics at this point 😄
@Manishearth, I agree with your list, though I'd add one more item to the discussion:
Another option is to fail |
/facetoface |
Filed two issues about premultiplied alpha on the layer as immersive-web/webxr#837 and immersive-web/webxr#838. As I understand it, we just need to mention something about alpha values being premultiplied in the spec to solve any remaining ambiguity and close this issue. I'll make a PR.
Potential fix in #25. I may have missed something. |
Closing, seems like everything here can be fixed by the alpha issues filed on the core spec. |
https://immersive-web.github.io/webxr-ar-module/#dom-xrenvironmentblendmode-additive
See #12 (comment) and #12 (comment)
It seems like the HoloLens does what the spec says for its primary display (it includes alpha when compositing screencaps); however, the Magic Leap is capable of differentiating between different alpha values used in the rendered scene, i.e. 50% red looks different from 100% red. It effectively uses the "lighter" composition algorithm.
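Modeling the two behaviors side by side (an illustrative sketch of the descriptions above, with normalized [0..1] channels and an environment light color env; the function names are mine):

```javascript
// "lighter"-style composition (the Magic Leap behavior described above, for
// straight/non-premultiplied input): alpha scales the added light.
function lighter(env, rgb, a) {
  return env.map((c, i) => Math.min(1, c + a * rgb[i]));
}

// Alpha-ignoring additive display (the HoloLens behavior described above):
// RGB is added as-is; the alpha channel never participates.
function additiveIgnoreAlpha(env, rgb) {
  return env.map((c, i) => Math.min(1, c + rgb[i]));
}

// 100% red vs 50% red over a dark environment: "lighter" differentiates them
// ([1,0,0] vs [0.5,0,0]), while the alpha-ignoring display shows both as [1,0,0].
```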
We should decide on what the requirements should be here, and update the spec text for environment blend mode and automatic composition accordingly. It may be worth having two modes here (additive and additive-alpha?). Note that this is not selective darkening, if we start seeing devices with selective darkening we may need yet another blend mode.
cc @thetuvix @cabanier