
Web support #13

Open
philpax opened this issue Mar 3, 2023 · 6 comments

Comments

@philpax
Owner

philpax commented Mar 3, 2023

Strictly speaking, this is not possible right now, as the official linkage between WebGPU and WebXR has not landed. However, it may still be achievable by copying the output from WebGPU to WebGL, and then from WebGL to WebXR. This requires more investigation.

@philpax
Owner Author

philpax commented Jun 29, 2023

I've opened an issue on the immersive-web proposal regarding its current status: immersive-web/WebXR-WebGPU-Binding#5

@rcelyte

rcelyte commented Jul 29, 2024

It actually is possible to use WebGPU with WebXR right now, using one blit from canvas->WebXR: Demo Source

@philpax
Owner Author

philpax commented Jul 30, 2024

Oh, fascinating, nice work!

@Ramith-D-Rodrigo

It actually is possible to use WebGPU with WebXR right now, using one blit from canvas->WebXR: Demo Source

Hi, I'm also interested in creating WebXR (specifically AR) experiences using WebGPU, but I'm still a rookie in this domain. Can you briefly give an overview of how you used WebGPU with the WebXR API in the demo source? What are the roles of WebGPU and WebGL? I see that both have been used in the demo.

In any case, I also have my own observation on the concept here; correct me if I'm wrong. Basically, the rendering is done by WebGPU after getting the pose information from WebGL. Doesn't that mean the GPU computation for XR operations is done by WebGL?

Also, a humble suggestion: if possible, please add comments to the source. It would be really helpful for understanding the approach you have employed.

Thanks!

@rcelyte

rcelyte commented Aug 7, 2024

WebGL is used strictly to forward rendered frames from WebGPU to WebXR; that's all. The core mechanism making this possible is API support for importing an HTMLCanvasElement as a blittable texture in WebGL via WebGLRenderingContext.texImage2D().
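
To sketch the idea (illustrative code only, not the exact demo source; names like `webgpuCanvas` are placeholders):

```ts
// Minimal sketch of the canvas -> WebXR blit, assuming:
//  - `webgpuCanvas` is the HTMLCanvasElement the WebGPU swap chain presents to,
//  - `gl` is the XR-compatible WebGL2 context backing the session's layer,
//  - `layer` is the session's XRWebGLLayer.
function blitToXR(gl: WebGL2RenderingContext, layer: XRWebGLLayer,
                  webgpuCanvas: HTMLCanvasElement): void {
  // Import the presented WebGPU frame as a WebGL texture.
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, webgpuCanvas);

  // Wrap the texture in a framebuffer so it can act as a blit source.
  const srcFb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, srcFb);
  gl.framebufferTexture2D(gl.READ_FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);

  // Blit into the XRWebGLLayer's framebuffer, which the XR compositor presents.
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, layer.framebuffer);
  gl.blitFramebuffer(0, 0, webgpuCanvas.width, webgpuCanvas.height,
                     0, 0, layer.framebufferWidth, layer.framebufferHeight,
                     gl.COLOR_BUFFER_BIT, gl.NEAREST);

  gl.deleteFramebuffer(srcFb);
  gl.deleteTexture(tex);
}
```

In a real renderer you would create the texture and framebuffer once and reuse them every frame; they are created inline here only to keep the sketch self-contained.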

getting the pose information from WebGL

WebGL isn't involved in anything besides textures; the rest of WebXR is CPU-side. Start with the documentation for XRFrame.getViewerPose() to learn how poses are handled in WebXR.
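
Roughly, the per-frame flow looks something like this (again, just an illustrative sketch, not the demo's code):

```ts
// Assumes an immersive XRSession is running and a reference space has already
// been obtained via session.requestReferenceSpace().
declare const refSpace: XRReferenceSpace;

function onXRFrame(_time: DOMHighResTimeStamp, frame: XRFrame): void {
  frame.session.requestAnimationFrame(onXRFrame);

  // Pose lookup is plain CPU-side WebXR state; WebGL plays no part here.
  const viewerPose = frame.getViewerPose(refSpace);
  if (!viewerPose) return; // Tracking can drop out for a frame.

  for (const view of viewerPose.views) {
    // view.transform (per-eye camera pose) and view.projectionMatrix are what
    // the WebGPU renderer consumes as its camera parameters.
  }
}
```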

still a rookie to this domain

I would recommend MDN's guides as a starting point for learning how WebXR works and the roles of each API.
As for learning WebGPU, I can't really give advice on that front, as I came from a background in native graphics (GL[ES], D3D11, Vulkan, Metal) and was able to apply that existing knowledge by reading through sample code to understand how the concepts mapped.

@Ramith-D-Rodrigo

Oh wow, thanks for the quick reply! I understand now. I'm currently learning WebGPU using the Dawn implementation, since I love C++.

WebGL isn't involved in anything besides textures, the rest of WebXR is CPU-side.

So you are referring to what is mentioned in the spec? If so, my bad; I should have read it more carefully.

I wanted to clarify the abstraction between WebXR and the rendering APIs, given that WebXR has some WebGL-based interfaces (e.g. XRWebGLBinding and XRWebGLLayer). Now I think that's cleared up. Also, thanks for the guidance. Much appreciated.
