Clarify renderWidth/Height: recommended size or 1:1? #125
Comments
I've been pretty happy so far with modifying the resolution on the fly to keep the frame rate up, so I'd recommend sticking with the greater of the two values. You can always step it down. Also, correct me if I'm wrong here, but the fill rate could be widely variable depending on the complexity of the fragment shader(s) and the amount of overdraw, so I'm not convinced the recommended value is useful. Even if you do use a fixed resolution, you're still gonna have to benchmark on a few devices.
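The "step it down" approach above can be sketched as a small controller that nudges a resolution scale based on measured frame time. This is not the commenter's actual code; the thresholds and step sizes below are illustrative assumptions, not anything from the spec.

```javascript
// Minimal dynamic-resolution controller: start at the full reported size
// and step the scale down when frames run over budget, then claw quality
// back when there is headroom. All thresholds here are illustrative.
function createResolutionController(targetFrameMs, minScale = 0.5, maxScale = 1.0) {
  let scale = maxScale;
  return {
    get scale() { return scale; },
    // Call once per frame with the measured frame time in milliseconds.
    update(frameMs) {
      if (frameMs > targetFrameMs * 1.1) {
        // Over budget: render fewer pixels next frame.
        scale = Math.max(minScale, scale - 0.1);
      } else if (frameMs < targetFrameMs * 0.8) {
        // Comfortable headroom: step quality back up slowly.
        scale = Math.min(maxScale, scale + 0.05);
      }
      return scale;
    },
  };
}
```

An app would multiply the reported render target width/height by `controller.scale` before sizing its drawing buffer each frame.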
Glad to hear that dynamic resolution is working well for you! My concern is that a lot of naive applications will just allocate a buffer at the size that we report and be done with it, even if it's not what they really want from a performance perspective. Then again, maybe we should just build dynamic resolution adaptation into Three.js and that'll solve it for 90% of devs. ;) Regardless of what we choose, the spec text should be more explicit about what the value represents.
Ideally every app would do some level of dynamic scaling (basic or advanced), and that's true of non-VR WebGL apps too, where it's currently not usually employed. I don't see a way around the app or framework handling it in some fashion. I'm not sure we need two different value sets, though: "good balance of quality and performance for most applications" is going to be pretty arbitrary and likely conservative, well below 1:1. For example, the Oculus Mobile SDK recommends 1024x1024 even for native apps (with specific apps like the photo app using larger targets, or using OVR Layers). That works because all supported phones are within a small factor of each other in performance and share the same 2560x1440 screen resolution, which won't be true of the hardware WebVR content runs on. And I don't know how those recommendations will change with increasing resolutions, more GPU power, etc.

Native is going to face the same problem, and I wonder what their thinking is. Do you know any thoughts from the Google VR SDK native side? From a cursory look (I could be wrong), it seems like someone moving from a Pixel (1080p) to a next-gen 4K @ 120Hz Daydream phone would suddenly need 4x the perf if sizes are based on screen size. A (native) app developer would have to handle scaling dynamically in that case if they didn't want 1/8 the framerate. So it's the developer's responsibility either way. Maybe just go with something like 1:1 and call it recommendedRenderWidth/Height?

+1 on doing lots of things the best way in three.js, best practices, etc.
Revisiting this, since I've now seen it be a sticking point for several different WebVR sites and browser implementations. Some points worth noting:
Given all of the above, I've got a new proposal for how to handle communicating HMD resolution. For VRLayers, add a resolution scale value, like so:
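The original code block for the proposed layer dictionary was not preserved in this copy of the thread. A plausible sketch of what was proposed, with `resolutionScale` as a hypothetical member name, might look like:

```javascript
// Sketch of the proposed VRLayer dictionary with a resolution scale.
// "resolutionScale" is a hypothetical name for the new member: a
// fraction of the display's native 1:1 resolution, where 1.0 means
// full native resolution.
const layer = {
  source: null,                      // would be the WebGL canvas in a real app
  leftBounds: [0.0, 0.0, 0.5, 1.0],  // standard VRLayer eye bounds
  rightBounds: [0.5, 0.0, 0.5, 1.0],
  resolutionScale: 1.0,              // hypothetical: requested scale vs. native
};
// In the browser this would be passed to vrDisplay.requestPresent([layer]).
```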
The default scale would be 1.0. The resolution itself could be surfaced in one of two ways: either via the requestPresent promise resolve, or simply by setting the canvas size to the requested resolution directly. Passing the resolution to the promise resolve is pretty simple from an API point of view, but a bit of a pain for the developer. Consider that we'll want to eventually support N layers, each of which may want its own scalar, so we can't just pass back a single width/height. Realistically the API would look something like this:
Used like so:
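The API and usage snippets here were lost in extraction. A runnable sketch of the flow being described, under the assumption that the promise resolves with one `{renderWidth, renderHeight}` entry per layer (all names are hypothetical reconstructions, and the mock display below is made up):

```javascript
// Mock of the proposed flow: requestPresent resolves with the actual
// render target size for each layer, after applying its resolutionScale.
// The native per-layer size below is illustrative, not real hardware data.
const mockDisplay = {
  nativeWidth: 3024,   // illustrative 1:1 size for both eyes combined
  nativeHeight: 1680,
  requestPresent(layers) {
    return Promise.resolve(layers.map((layer) => {
      const scale = layer.resolutionScale ?? 1.0;
      return {
        renderWidth: Math.round(this.nativeWidth * scale),
        renderHeight: Math.round(this.nativeHeight * scale),
      };
    }));
  },
};

// "Used like so": size the canvas once the promise reports the real size.
function present(display, canvas, scale) {
  return display.requestPresent([{ source: canvas, resolutionScale: scale }])
    .then((layerInfos) => {
      canvas.width = layerInfos[0].renderWidth;
      canvas.height = layerInfos[0].renderHeight;
      return canvas;
    });
}
```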
Which, admittedly, feels a little silly. But it's scalable! In that light, setting the canvas size directly is superficially appealing because it magically makes sure that the right thing happens in almost all cases. It's also a bit distressing because it "magically" makes sure that the right thing happens, and nothing is more frustrating to a developer than when the magic fails. It also comes with a bunch of behavioral questions:

- Do we restore the old size after presentation ends?
- Should we allow manual resizing during presentation?
- If we extend layers to include non-canvas types, what's the expected behavior?

I'm also not sure there's precedent for an API manipulating element dimensions this directly, so it might get some pushback by virtue of simply being weird. In either case, though, I feel like this fulfills several goals: it doesn't surface the value till it's needed (fingerprinting entropy--, BTW!), it gives the user control over quality while keeping the default case sensible, and it ensures that any resizing that does happen MUST happen at the right time. Thoughts?
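The "set the canvas size directly" option can be mocked to show what's at stake in the open questions above. This sketch assumes one possible answer (restore the old size on exit); nothing here is specified behavior, and all names are invented for illustration.

```javascript
// Mock of the "magic resize" alternative: the UA resizes the canvas when
// presentation starts and (one possible answer to the restore question)
// puts the old size back when presentation ends.
function magicPresent(canvas, targetWidth, targetHeight) {
  const saved = { width: canvas.width, height: canvas.height };
  canvas.width = targetWidth;
  canvas.height = targetHeight;
  // Return the cleanup the mock UA would run on exitPresent.
  return function endPresent() {
    canvas.width = saved.width;   // restore pre-presentation size
    canvas.height = saved.height;
  };
}
```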
Closing since this is planned to be addressed in 2.0. If anyone has issues with the 2.0 proposal, please file a new issue. For details see: https://github.com/w3c/webvr/blob/master/explainer.md#high-quality-rendering
On some devices, especially mobile, there are two potential width and height values we could report for the render targets. One is the size required for the pixels at the center of the user's vision to be 1:1 with the physical screen pixels post-distortion. The other is the size of the buffer that the platform feels provides the best balance of quality and performance. For example: on GearVR the reported value is 1024x1024, even though the screen could benefit from a higher resolution. Their documentation explains that this value hits a sweet spot for most apps, though something like a photo viewer may want to use a larger target.
Which of these values should WebVR report? Or should we report both?