
Clarify lifetime of textures from layers #101

Closed
toji opened this issue Apr 23, 2020 · 68 comments

@toji
Member

toji commented Apr 23, 2020

Similar to #96, we need to make it clear that textures returned from getSubImage()/getViewSubImage() are not accessible after the frame has ended. The most reasonable path seems to be to mark the WebGL texture handle as deleted once the texture has been consumed by the compositor, but we'll also need to sanity-check the behavior of cases where the texture is bound either as an active texture or as a framebuffer attachment.
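For reference, the per-frame flow under discussion looks roughly like this (a sketch only, not spec text; `xrBinding`, `projLayer`, `refSpace`, and `fb` are illustrative names, and the API shape follows the draft layers spec):

```javascript
// Hypothetical per-frame rendering loop against the draft layers API.
// `session`, `gl`, `xrBinding`, `projLayer`, `refSpace`, and `fb` are
// assumed to have been set up elsewhere.
function onXRFrame(time, frame) {
  session.requestAnimationFrame(onXRFrame);
  const pose = frame.getViewerPose(refSpace);
  if (!pose) return;

  for (const view of pose.views) {
    // The returned colorTexture is the handle whose lifetime is at issue:
    // per this discussion it must not be usable once the rAF ends.
    const subImage = xrBinding.getViewSubImage(projLayer, view);
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, subImage.colorTexture, 0);
    // ... draw the scene for this view ...
  }
  // After this callback returns, the compositor consumes the textures;
  // holding on to subImage.colorTexture past this point is the hazard.
}
```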

@asajeffrey

Marking the texture as deleted in WebGL but not deleting it in GL (because the texture may have been provided by the device) means introducing yet another place where WebGL state and GL state disagree.

@cabanier
Member

There was already some text in the spec but I attempted to clarify it in PR #100

@toji
Member Author

toji commented Apr 23, 2020

Marking the texture as deleted in WebGL but not deleting it in GL (because the texture may have been provided by the device) means introducing yet another place where WebGL state and GL state disagree.

Yeah, and I'm always reluctant to introduce deviations like that. Given that the texture is going to be consumed by the compositor I don't know a more elegant path, though. The only thing that functions that way in WebGL today is the default framebuffer, and I think that our opaque framebuffer concept for the XRWebGLLayer already stretched the limits of what I felt comfortable with in terms of adding new logic to WebGL. This still feels like a smaller delta than that, at least. If you have suggestions on how to improve it I'd be thrilled to hear them!

Good news longer term is that it seems like WebGPU will give us tools to handle this kind of swapchain-like behavior much more directly.

@cabanier
Member

Marking the texture as deleted after the frame ends seems like a reasonable approach.
getSubImage and getViewSubImage would then signal texture creation.

@Artyom17
Contributor

Artyom17 commented Apr 23, 2020

This brings up another issue: 'static' layers (mentioned in #49). There are certain layers which are not updated at all, or not updated every frame. What is our approach in terms of texture lifetime for such cases?
For example, if we know the layer is 'static' we can allocate fewer textures in the swapchain (1 instead of 3, for example).
However, if we don't update the layer every frame, there could be a situation where the compositor discards or replaces the texture for some reason and the content is lost.

@Artyom17
Contributor

I think we may have the following categories of layers:

  • updated every frame
  • updated only occasionally
  • never updated, populated only once.

What happens to the textures of the 2nd and 3rd categories? Do we allow such layers? If the compositor discards the texture for these categories, how do we communicate this event back to the experience so it can re-populate the layers (an 'ondiscard' event?)

@cabanier
Member

cabanier commented Apr 23, 2020

This brings up another issue: 'static' layers (mentioned in #49). There are certain layers which are not updated at all, or not updated every frame. What is our approach in terms of texture lifetime for such cases?

That is already covered by the spec. A layer is only updated if the author calls getSubImage or getViewSubImage. There are some hints for best practices but they should be updated.

@Artyom17
Contributor

Uhm... I don't think it is covered enough. If we do not update the layer (i.e. don't call getSubImage or getViewSubImage) - what happens to the texture's content? The intent is to have it unchanged, right?
However, compositor may have its own plans on the texture. For example, the compositor may discard it and require UA to re-allocate it. I probably should open another issue because I recently hit it and I am a bit stuck and don't know how to resolve it gracefully.

@cabanier
Member

Uhm... I don't think it is covered enough. If we do not update the layer (i.e. don't call getSubImage or getViewSubImage) - what happens to the texture's content?

It's the same as what happens with XRWebGLLayer, the compositor may choose to reproject the content based on the new pose.

The intent is to have it unchanged, right?

That's really up to the compositor. Once the XR animation frame ends, the compositor takes control.

However, compositor may have its own plans on the texture. For example, the compositor may discard it and require UA to re-allocate it. I probably should open another issue because I recently hit it and I am a bit stuck and don't know how to resolve it gracefully.

Yes, this seems to be a different issue.

@Artyom17
Contributor

Yes, this seems to be a different issue.

#106

@RafaelCintron

RafaelCintron commented Apr 29, 2020

It's tempting to solve texture lifetime by having the WebGL texture act as if it has been destroyed.

However, per the Deleted Object Behavior conformance test, a deleted texture can still be usable while it remains attached to a framebuffer. So if the underlying XR implementation recycled the texture for another purpose, we have to ensure the recycled (non-portable) contents do not appear when the framebuffer it was attached to is used outside of the XR rAF.

One possible solution is to spec that WebGL textures created by the XR runtime act as if they've been cleared to black when used outside of the XR rAF. This is similar to how WebGL framebuffer contents behave after invalidation. Of course, it would not be acceptable for the implementation to ignore the invalidation call in the XR case.

I agree with @toji that WebGPU will handle texture destruction more elegantly. In WebGPU, destroying a texture will prevent you from submitting new work to the GPU which uses the texture.

@Artyom17
Contributor

For the Oculus implementation, the swapchain textures are not destroyed / reallocated every frame; they are reused. Rendering occurs directly into the swapchain textures and these textures are used by the VR compositor, i.e. I can't invalidate them or clear them to black at all.
So, at the end of the rAF the WebGL texture object can be marked as 'deleted' and the underlying texture can be unbound from all the framebuffers. This way no rendering can occur outside of the rAF. But the actual physical texture itself cannot be altered in any way.

@cabanier
Member

cabanier commented Apr 30, 2020

FYI the spec currently defines that the same textures are returned for each rAF call.

@Artyom17
Contributor

Well, it is also wrong, because a different texture may be returned for each rAF and for each view.

@cabanier
Member

cabanier commented Apr 30, 2020

Well, it is also wrong, because a different texture may be returned for each rAF and for each view.

Each view might have its own texture but it would still be the same in each rAF call.
We should be careful about creating objects for each frame and each layer as noted by @asajeffrey in immersive-web/webxr#1010

@Artyom17
Contributor

I think it could be left to the implementation. You don't have to re-create WebGLTextures each frame, but an implementation may want to rotate some pre-allocated WebGLTexture objects, or rotate the underlying GL textures inside a single WebGLTexture object. Not sure we should limit implementers one way or the other.

@cabanier
Member

Not sure we should limit implementers one way or the other.

Unless someone complains, I will leave the spec as-is :-)

@cabanier
Member

So, at the end of the rAF the WebGL texture object can be marked as 'deleted' and the underlying texture can be unbound from all the framebuffers. This way no rendering can occur outside of the rAF. But the actual physical texture itself cannot be altered in any way.

@RafaelCintron, would this suggestion from @Artyom17 work?

One possible solution is to spec that WebGL textures created by the XR runtime act as if they've been cleared to black when used outside of the XR rAF.

Would this stop authors from writing to this texture? We want to avoid changing the texture after the rAF as that will change the pixels that are drawn by the compositor.

@RafaelCintron

[RafaelCintron] One possible solution is to spec that WebGL textures created by the XR runtime act as if they've been cleared to black when used outside of the XR rAF.

[cabanier] Would this stop authors from writing to this texture? We want to avoid changing the texture after the rAF as that will change the pixels that are drawn by the compositor.

Good point. Opaque framebuffer did this by forcing itself to be incomplete outside of the rAF, thus preventing you from rendering to it.

My understanding is that WebGL implementations (at least in Chromium) only keep references to objects, such as framebuffers, that are bound to the WebGL context. They do not keep references to unbound framebuffers that happen to have their attachments deleted out from under them. This is why you can still "use" (render to) a texture that has been deleted while it is attached to an unbound framebuffer.

To solve our problem, WebGL will have to break this behavior (at least for WebXR textures) and forcibly detach deleted textures from framebuffers when those framebuffers become re-bound to the WebGL context.
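The deleted-object behavior being described can be reproduced with plain WebGL along these lines (a sketch assuming an existing WebGL context `gl`; not XR-specific):

```javascript
// Sketch: a texture deleted while attached to an *unbound* framebuffer
// is not detached, so the framebuffer can still render to it.
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 4, 4, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);

gl.bindFramebuffer(gl.FRAMEBUFFER, null); // fb is no longer bound
gl.deleteTexture(tex);                    // tex is now "deleted" in WebGL...

gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
// ...but the attachment was never detached, so the framebuffer can still
// be complete and drawn to. This is the loophole that would let recycled
// XR texture contents leak outside the rAF.
```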

@Artyom17
Contributor

Artyom17 commented May 1, 2020

@RafaelCintron Yes, I think you are right. The WebXR WebGLTexture could keep a list of all the framebuffers it is attached to and forcefully detach itself at the end of the rAF. Do you think it should be discussed with the WebGL WG? Or can we just do it under-the-hood?

@RafaelCintron

RafaelCintron commented May 6, 2020

@Artyom17 , from looking at the code in Chromium, WebGLTexture (via its base class) only knows how many objects it is attached to, not what they are.

Doing things "under the hood" requires at least the code review of respective WebGL owners. Might as well add it to the working group's agenda and get their take ahead of time. What could go wrong? :-)

@kenrussell, @jdashg and @grorg.

@kdashg

kdashg commented May 6, 2020

Instead of detaching from framebuffers, just mark the resource as (temporarily?) unrenderable. We don't want to invalidate FB completeness if we can help it, so strong vote against actually detaching resources from framebuffers.

@kdashg

kdashg commented May 6, 2020

Marking the texture as deleted in WebGL but not deleting it in GL (because the texture may have been provided by the device) means introducing yet another place where WebGL state and GL state disagree.

This is already true IIRC, and TBH it's fine. deleteTexture really means "invalidate this handle to the underlying texture resource, and unbind any references, excluding containers that aren't currently bound". The texture may still be in use on a "background" (unbound) framebuffer object, though you will no longer be able to rebind the texture for sampling.

Firefox (Gecko) defers the call to glDeleteTexture until the texture is actually unused: https://searchfox.org/mozilla-central/rev/dc4560dcaafd79375b9411fdbbaaebb0a59a93ac/dom/canvas/WebGLTexture.cpp#89

@RafaelCintron

Instead of detaching from framebuffers, just mark the resource as (temporarily?) unrenderable. We don't want to invalidate FB completeness if we can help it, so strong vote against actually detaching resources from framebuffers.

@jdashg, if we mark textures as unrenderable, does that make their corresponding framebuffers be incomplete? If so, I presume this would invalidate FB completeness.

Your "temporarily?" comment is relevant. As currently spec'd, you get back a new XRWebGLSubImage instance (and thus a new WebGLTexture) on every call to getSubImage.

partial interface XRWebGLBinding {
  XRWebGLSubImage getSubImage(XRCompositionLayer layer, XRFrame frame);
  XRWebGLSubImage getViewSubImage(XRCompositionLayer layer, XRView view);
};

interface XRWebGLSubImage : XRSubImage {
  [SameObject] readonly attribute WebGLTexture colorTexture;
  [SameObject] readonly attribute WebGLTexture? depthStencilTexture;
  readonly attribute unsigned long? imageIndex;
};

Thus, developers will end up binding new WebGL textures to framebuffers on every frame anyway.

@cabanier
Member

cabanier commented May 7, 2020

Your "temporarily?" comment is relevant. As currently spec'd, you get back a new XRWebGLSubImage instance (and thus a new WebGLTexture) on every call to getSubImage.

Thus, developers will end up binding new WebGL textures to framebuffers on every frame anyway.

No, as currently defined (and in our implementation), you will get back the same WebGLTexture with each layer/view combination.

@kenrussell

Coming in cold here; unfamiliar with the WebXR spec.

Does this mean that correctly written WebXR applications should bind the textures in the XRWebGLSubImage to a framebuffer object every frame, but applications that don't do that - and reuse their previously set up framebuffer object from frame to frame - will happen to work on Oculus' implementation?

This sounds like behavior that should be tightly defined in the WebXR spec, and be covered by rigorous conformance tests.

Coming back to the question of WebGLTexture objects, I think it would be fine to specify that the WebGLTextures are invalidated in some way outside the rendering callback, and that the same WebGLTexture object is resuscitated during the next rendering callback. It would be good to discuss this with the rest of the WebGL WG, or at least put together a concrete writeup that can be commented on.
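The invalidate/resuscitate lifecycle described above can be pictured as a tiny state machine: the texture handle is only usable between the start and end of an XR animation frame. A pure-JavaScript model of that invariant (hypothetical names; this is not proposed spec text):

```javascript
// Hypothetical model of "invalidated outside the rAF, resuscitated inside".
class XRTextureHandle {
  constructor() { this.valid = false; }
  beginFrame() { this.valid = true; }  // "resuscitated" inside the rAF
  endFrame() { this.valid = false; }   // invalidated when the rAF returns
  use() {
    if (!this.valid) {
      throw new Error("texture used outside an XR animation frame");
    }
    return "drawn";
  }
}

const handle = new XRTextureHandle();
handle.beginFrame();
handle.use();      // OK inside the frame
handle.endFrame();
// handle.use() would now throw
```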

@cabanier
Member

cabanier commented May 7, 2020

Does this mean that correctly written WebXR applications should bind the textures in the XRWebGLSubImage to a framebuffer object every frame, but applications that don't do that - and reuse their previously set up framebuffer object from frame to frame - will happen to work on Oculus' implementation?

The current thinking is that we need to unbind all the framebuffer objects at the end of the frame because the state of the textures is undefined if you're not in a rAF call.
We definitely don't want code that behaves differently on different platforms.

This sounds like behavior that should be tightly defined in the WebXR spec, and be covered by rigorous conformance tests.

I agree. This is why we're having this conversation. :-)

Coming back to the question of WebGLTexture objects, I think it would be fine to specify that the WebGLTextures are invalidated in some way outside the rendering callback, and that the same WebGLTexture object is resuscitated during the next rendering callback.

Yes, that is the part we're trying to define.

@cabanier
Member

@kenrussell should we take this up with the WebGL group?
If so, how should we proceed?

@RafaelCintron

[@jdashg wrote] To that end, I prefer (3), where a UA MAY return a new WebGLTexture, or a previous one. This best surfaces what we're actually dealing with under the hood, and lets devs think clearer and make better choices. It also saves some complexity-budget in our implementations.

One drawback of (3) is potential interop issues between browsers. Suppose one UA reuses the same WebGL texture every frame (like Oculus browser does) while another one returns new WebGL textures per frame. If developers only code and test on the former browser, their site will be broken on a browser which implements the latter behavior.

This scenario played out with legacy Edge and the gamepad API. Since Edge returned new gamepad objects with copies of data, sites were broken because people assumed gamepad objects were "live" like they were in Chrome. In the end, we ended up matching Chrome's behavior.

We also need to decide the behavior of XR WebGL textures when they're used outside of XR rAF calls. If they're backed by swapchain textures returned by the XR runtime, there will be correctness issues if developers write to them outside of the XR rAF, or write to the wrong one even inside of the XR rAF. Making textures be non-renderable by WebGL would eliminate this portability issue.

@kdashg

kdashg commented May 15, 2020

@jdashg I'm happy to change the spec to option 3) but I'll hold off until the WebGL WG agrees.

Do you know how we can get this on the agenda?

Contact your WebGL rep! I would add it for you, but you'll want to chat it over with them outside this issue page regardless.

@kdashg

kdashg commented May 15, 2020

To that end, I prefer (3), where a UA MAY return a new WebGLTexture, or a previous one. This best surfaces what we're actually dealing with under the hood, and lets devs think clearer and make better choices. It also saves some complexity-budget in our implementations.

One drawback of (3) is potential interop issues between browsers. Suppose one UA reuses the same WebGL texture every frame (like Oculus browser does) while another one returns new WebGL textures per frame. If developers only code and test on the former browser, their site will be broken on a browser which implements the latter behavior.

This scenario played out with legacy Edge and the gamepad API. Since Edge returned new gamepad objects with copies of data, sites were broken because people assumed gamepad objects were "live" like they were in Chrome. In the end, we ended up matching Chrome's behavior.

These are real concerns, but they have good company: This isn't the only aspect that changes between browsers. I think we should avoid these pitfalls where we can, but that this is a case where the benefits outweigh the risks.

We also need to decide the behavior of XR WebGL textures when they're used outside of XR rAF calls. If they're backed by swapchain textures returned by the XR runtime, there will be correctness issues if developers write to them outside of the XR rAF, or write to the wrong one even inside of the XR rAF. Making textures be non-renderable by WebGL would eliminate this portability issue.

I think temporarily making them non-writable is a great way to deal with this.
Are they readable out-of-rAF, though? If they are readable but not writable, we shouldn't reuse "renderable" for this, since that would preclude non-renderable resources from being copied via BlitFramebuffer.

@cabanier
Member

This scenario played out with legacy Edge and the gamepad API. Since Edge returned new gamepad objects with copies of data, sites were broken because people assumed gamepad objects were "live" like they were in Chrome. In the end, we ended up matching Chrome's behavior.

These are real concerns, but they have good company: This isn't the only aspect that changes between browsers. I think we should avoid these pitfalls where we can, but that this is a case where the benefits outweigh the risks.

I don't see any benefit here.
We really don't want to create unnecessary texture objects or expose the concept of the underlying swapchain, but if this is a hard requirement for Mozilla, I'm strongly inclined to update the spec and our implementation to match.
However, I'm hopeful that Mozilla finds a way to have a single texture per layer (per view) since they were able to do so for WebXR.

We also need to decide the behavior of XR WebGL textures when they're used outside of XR rAF calls. If they're backed by swapchain textures returned by the XR runtime, there will be correctness issues if developers write to them outside of the XR rAF, or write to the wrong one even inside of the XR rAF. Making textures be non-renderable by WebGL would eliminate this portability issue.

I think temporarily making them non-writable is a great way to deal with this.
Are they readable out-of-rAF, though? If they are readable but not writable, we shouldn't reuse "renderable" for this, since that would preclude non-renderable resources from being copied via BlitFramebuffer.

No, you can't read from them.
Depending on the backend, that texture might become unreachable from WebGL or it might even be completely gone from the browser process.

This topic has diverged a bit into a discussion on lifetime and the number of textures.
Should I open a new issue so it's less confusing?

@cabanier
Member

@jdashg I'm happy to change the spec to option 3) but I'll hold off until the WebGL WG agrees.
Do you know how we can get this on the agenda?

Contact your WebGL rep! I would add it for you, but you'll want to chat it over with them outside this issue page regardless.

I'm trying to figure out who our rep is.
In the meantime, @RafaelCintron, would you want to put it on the agenda since we both seem to have the same line of thinking?

@asajeffrey

asajeffrey commented May 16, 2020

This requires the WebGL implementation to change pretty drastically, in that WebGLTexture objects would now be mutable.

@asajeffrey

At @cabanier's request, moving the discussion in #141 here...

tl;dr: currently the per-frame updating of texture and render state is done implicitly, e.g. https://immersive-web.github.io/layers/#xrwebglsubimagetype says "The colorTexture and depthStencilTexture objects MUST only be used during the XR animation frame of the current session and MUST be made invalid once the XR animation frame completes."

We could use frame updates (https://immersive-web.github.io/webxr/#frame-update) to make this explicit.

@cabanier
Member

We could use frame updates (https://immersive-web.github.io/webxr/#frame-update) to make this explicit.

Is the idea that we would schedule an update for each call to getViewSubImage and getSubImage that would change the state of the WebGLTextures at the end of the rAF call?

@asajeffrey

We can use frame updates to update the render state (e.g. which textures are in each subimage) at the beginning of the rAF. Slightly annoyingly, there isn't a matching hook at the end of the rAF, but if there were one we could use it to invalidate/delete/whatever the textures.

@cabanier
Member

Maybe we can add a callback to the end of the current list of animation frame callbacks.
I believe there is no way that other code can add something to the current list.

@asajeffrey

That might work, but seems a bit brittle - we'd have to get the timing just right, after any rAF callbacks got added by user code, but before the list of rAF callbacks is read in https://immersive-web.github.io/webxr/#xr-animation-frame.

@cabanier
Member

Since we add the callback at getViewSubImage and getSubImage and nothing else touches that list, the timing should be right.
I don't see https://immersive-web.github.io/webxr/#xr-animation-frame making copies of the list or disallowing changes to it.

@kenrussell

@cabanier @RafaelCintron @jdashg let's try to discuss this at the upcoming WebGL working group meeting this Thursday May 21. I'll contact you all via email to try to get this set up.

@kenrussell

@asajeffrey @jdashg one thought - some additional state could be added to the WebGLTexture objects themselves in Mozilla's implementation so that the WebXR impl could mark them as temporarily invalid, or valid again. Hopefully, without adding too much overhead, some additional WebGL-level validation could then be done for WebGLFramebuffer objects that attach those textures, and draw calls could be rejected to those framebuffers while the textures are invalid. This could certainly impact draw-call performance even further, so would have to be carefully tested before committing to a WebXR spec change in this direction. For full correctness it would have to also handle the case where the user attempts to sample one of these WebGLTextures.

@RafaelCintron

[@jdashg wrote] If they are readable but not writable, we shouldn't reuse "renderable" for this, since that would preclude non-renderable resources from being copied via BlitFramebuffer.

Good point, Jeff. The texture must not be able to be rendered to, sampled, blitted or readpixeled from. It's untouchable! :-)

@cabanier
Member

@jdashg thanks for the great conversation at today's WebGL meeting.
It sounded like we were going back and forth a bit, but I think we concluded that you were going to investigate how much work it would be to have a single WebGLTexture that switches its backing, and that we'd circle back in the next meeting.

Is this your recollection as well?

@kenrussell

Here are some conclusions from today's WebGL working group meeting. Please correct me if there are errors.

  • No firm agreement on whether to vend multiple WebGLTextures from WebXR, or magically swap out the texture object inside a single WebGLTexture.
  • Manually marking an individual WebGLTexture as incomplete / complete again is easy in Firefox, and not too difficult in ANGLE, so might be tractable in Chromium.
  • However, it would likely be easier in all implementations if we started by forbidding bindTexture with these WebGLTextures coming from WebXR.

@cabanier
Member

#174 will address the concerns raised in this issue.
Closing. If people have concerns, please reopen.

@asajeffrey

I'd like to reopen this issue until we have agreement from the WebGL WG about using the same WebGL texture object with different GL textures backing it.

@asajeffrey reopened this Jul 31, 2020
@cabanier
Member

This was discussed in the WebGL face-to-face a while ago and it was agreed that it was ok to re-use the same texture but swap out the backing.

@asajeffrey

@cabanier ah, this has been discussed since #101 (comment) ? Are there WebGL WG meeting notes we can point to?

@cabanier
Member

cabanier commented Jul 31, 2020

I have the google doc with the meeting minutes but it's only shared with the WebGL members.
@kenrussell are the notes public?

@kenrussell

Sorry, the WebGL working group notes are only accessible to Khronos group members.

@asajeffrey

Okay, I'll talk to our folks on the WebGL WG.

@asajeffrey

OK, I read over the WebGL meeting notes; it sounds like the plan is to recycle the WebGL texture objects and only build a new one if the texture is resized. We can close this issue again, and discuss how textures become invalid and valid over in #174.
