
projection/view matrix use requirements #159

Closed
andreasplesch opened this issue Dec 3, 2016 · 5 comments

Comments

@andreasplesch

https://w3c.github.io/webvr/#dom-vrframedata-leftprojectionmatrix

The spec language strongly recommends using, unchanged, the projection and view matrices provided by the VR display. There seem to be a number of unstated assumptions behind that recommendation.

I think one assumption is that the vertex positions to be viewed and projected are given in the same units (say, meters) as the reported position of the VR display. While this assumption may be self-evident for some, it is not for all. The value of a spec lies largely in removing any ambiguity.

Asked another way: should the reported matrices still be used if the virtual reality is situated within a biological cell, measured and reported in meters? Probably not. The expectation presumably is that the 3D content is scaled up/down to some human scale (the meter, or the foot). Is there such an expectation? If there is, does it affect anything other than (avoiding) use of the recommended projection/view matrices?

Another assumption which I am less sure of seems to be that vertex coordinates (after being transformed to a world space) should be relative to the same origin as the position of the VR display as reported. I think this may be addressed by #149.

There may be other assumptions which involve how the real world in which the VR display lives relates to the virtual world of the provided content.

While all this perhaps sounds quite abstract, I do think it is important to be very explicit, almost mathematically rigorous, in a successful specification of functionality that is to be widely available and used.

@ssylvan

ssylvan commented Dec 3, 2016

I guess we specify that units are meters in some places, but not everywhere. Perhaps it's worth making that clear at some top-level part of the spec?

If you are viewing a cell in VR, the way to do that is indeed to blow up the cell so that its size is on the order of meters (making it possible to walk around inside it). You should be using the reported matrices in this case, yes: scale the objects up to the right size, and don't mess with the view/projection.

There may be obscure corner cases that I can't think of, but generally speaking never mess with the view and projection matrices in VR, since that can cause comfort issues.
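In render-loop terms, this advice amounts to composing the per-eye model-view-projection matrix from the reported matrices verbatim. A minimal sketch in plain JavaScript; the `mul4` and `eyeMVP` helpers are illustrative, not part of the WebVR API:

```javascript
// Multiply two 4x4 column-major matrices (the layout WebGL expects).
function mul4(a, b) {
  const out = new Float32Array(16);
  for (let c = 0; c < 4; c++) {
    for (let r = 0; r < 4; r++) {
      let s = 0;
      for (let k = 0; k < 4; k++) s += a[k * 4 + r] * b[c * 4 + k];
      out[c * 4 + r] = s;
    }
  }
  return out;
}

// Per-eye MVP: the reported projection and view matrices are used
// verbatim; only the application-provided model matrix varies.
function eyeMVP(projection, view, model) {
  return mul4(projection, mul4(view, model));
}
```

In a WebGL app this means uploading `frameData.leftProjectionMatrix` / `frameData.leftViewMatrix` (and the right-eye pair) straight into shader uniforms and doing all application-specific transformation in the model matrix.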

@andreasplesch
Author

andreasplesch commented Dec 4, 2016

Thanks for confirming that the provided matrices assume that the virtual world should literally correspond to the room in which the experience takes place.

The other option would be to introduce a scale factor which can be used to shrink IPD and reported positions to be compatible with content coordinates. I believe VREffect has such a scale factor.

@ssylvan

ssylvan commented Dec 4, 2016

One important thing to emphasize is that the view and projection matrices are related to real physical properties of the HMD. E.g. they will depend on the screen size, the lens, the distance, and so on. Some parts even depend on the physical location of the user's eyes. Many devices have very carefully calibrated matrices to make sure things look right and minimize comfort issues. For that reason, any tampering with them is risky. The IPD is an implicit part of this (it's really the difference between the translation part of the two view matrices).

The way to introduce scaling behavior is to do it in the "object to world" matrix (the model matrix in OpenGL parlance). This logic should be entirely on the application-side of things. So don't try to scale the view and projection matrices, throw a scaling factor on the objects instead. That seems like the easiest way to make sure your final model-view-projection matrices have the correct behavior (i.e. it's harder to mess things up if you just stay away from modifying the view and projection matrices at all).
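A hedged sketch of that advice (the helper names are hypothetical): put the scale on the diagonal of the model matrix and leave the reported view/projection pair alone.

```javascript
// Hypothetical helper: a column-major 4x4 model ("object to world")
// matrix carrying a uniform scale and a translation. The view and
// projection matrices from the display are never touched; only this
// matrix does the scaling.
function scaledModelMatrix(scale, tx, ty, tz) {
  return new Float32Array([
    scale, 0, 0, 0,
    0, scale, 0, 0,
    0, 0, scale, 0,
    tx, ty, tz, 1,
  ]);
}

// Apply an affine column-major matrix to a point (w = 1 assumed).
function transformPoint(m, p) {
  return [
    m[0] * p[0] + m[4] * p[1] + m[8] * p[2] + m[12],
    m[1] * p[0] + m[5] * p[1] + m[9] * p[2] + m[13],
    m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
  ];
}
```

For the cell example earlier in the thread: a vertex authored at micrometer size, run through a model matrix with scale 1e6, lands at meter size in world space before the unmodified view and projection matrices are applied.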

It's possible to provide some helpers to do this in the WebVR API, but it seems like it can live quite naturally in middleware (like game engines) or just in the application itself too.

@andreasplesch
Author

andreasplesch commented Dec 4, 2016

So I would suggest just mentioning in the spec that the matrices assume the content is viewed as a real human would view it, which may require that the content be appropriately scaled. For example, if an experience involves pretending to be a mouse, it may be necessary to upscale everything by a factor of 10 or so.
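One plausible way to pick such a factor (the eye heights below are illustrative guesses, not values from the spec):

```javascript
// Illustrative only: derive a world-scale factor from assumed eye
// heights, so a human-height viewer sees the scene roughly as a
// mouse would.
const humanEyeHeight = 1.6;   // meters, rough standing adult
const mouseEyeHeight = 0.03;  // meters, rough guess
const worldScale = humanEyeHeight / mouseEyeHeight;
// The resulting factor (~53 here) would be baked into the scene's
// model matrices, never into the view/projection matrices.
```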

Of course, there is then the question of how to deal with an experience that involves multiple scales, but this should not be the spec's concern.

@toji
Member

toji commented May 16, 2018

This issue was moved to immersive-web/webvr#14

@toji toji closed this as completed May 16, 2018