Explore using meshes for all RWG #6
From #2, Blair says: I'm less keen on having the planes be a field in the returned value; this implies that any kind of info will be in its own area. Why aren't planes a kind of mesh? If all things that are mesh-like are meshes, then an app that cares only about meshes (for occlusion, for example, or for drawing pretty effects on anything known in the world) can just deal with that.
From #2, bialpio says: Not all things that are related to world knowledge are meshes (although in this repo we'll focus on RWG). Meshes are definitely less redundant (when they are available), but we feel they need to be separate detectable features to allow flexibility in the future. There is also a question of confidence differences between different types of real-world data. For example, ARKit and ARCore may have different confidence requirements for returning a mesh or a plane. If we say we can represent a plane as a mesh, that is kind of true, but not really: an accurate reflection of the data returned by ARKit/ARCore would require attaching metadata to the mesh for things like the plane center/extents or even the surface normal. Which basically puts us back in the situation of having multiple types of data.
From #2, Blair says:
That's orthogonal to my point. Obviously there will be world knowledge that has nothing to do with meshes. But the idea of having a "planes" member, with a convex boundary, is too specific. In this case, for example, I would posit it will be obsolete before this proposal is finished -- it's pretty easy to imagine that some time soon planes (in ARKit and/or ARCore) will support arbitrary concave boundaries, with holes in them. The current planes are woefully inadequate. My suggestion to have (when the underlying object is compatible, obviously) a generic mesh type, which has a relatively simple representation but can have additional fields based on what it actually represents, has all the advantages of a specific "plane" field, plus a bunch of other advantages. As we move toward additional capabilities (e.g., world meshes like on HoloLens/ML1), they can be represented similarly; future things (like segmenting moving objects out of a static world) will also fit. But applications that DON'T know (or care) about these new types can fall back to just using the mesh, if they want.
You are right that we need additional metadata (that's what I proposed, right?), but adding it absolutely does not put us back in the same place. If I don't know what a "segmentedObject" or (in this case) "plane" or some future "concavePlane" is, but they are all "mesh" with the required "mesh" data provided, I can use them. I'm very concerned about evolution and headway over time. Having simple convex planes with a simplistic geometric boundary as a base data type doesn't really work.
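A rough sketch of the fallback this argues for (the collection, helper, and reference-space names below are hypothetical, not from any spec):

```js
// Hypothetical sketch: every detected object, whatever its specific type,
// carries the required "mesh" data, so an app that has never heard of
// "segmentedObject" or "concavePlane" can still render it.
xrFrame.worldInformation.detectedObjects.forEach(obj => { // invented collection
  const pose = obj.getPose(xrReferenceSpace);             // assumed reference space
  drawTriangles(obj.vertices, obj.triangles, pose);       // app-provided helper
});
```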
Here's the post in #1 that kicked off my thinking on this; I'm including it because it summarizes a few things that are only "implied" above: A simple solution to world geometry is to pick a lowest-common-denominator representation, like "a forest of meshes". But then ARKit/ARCore planes become meshes and lose semantics: we no longer know that they are meant to correspond to vertical or horizontal planes. Similarly, faces could be exposed as a mesh that comes and goes, as could moving objects or detected images, etc. So, a slightly less "wasteful" approach might be to say
I don’t think that we are proposing planes as a base data type. My approach here is that I don’t think we need a basic data type at all: we can return different kinds of objects and just specify that each of them has a mesh (contrast with “each of them is a mesh”, as with inheritance). This also allows us to enable feature detection based on the object type (a detectedPlanes array that is undefined means the device doesn’t support plane detection). Additionally, to me the following two code snippets are almost equivalent; I prefer the second one, as it doesn’t require an app to do something akin to this:

```js
meshes.forEach(mesh => {
  let pose = mesh.getPose(xrReferenceSpace); // origin of mesh or plane
  if (mesh.isPlane) {
    // mesh.polygon is an array of vertices defining the boundary of a 2D polygon,
    // with x, y, z coordinates
    let planeVertices = mesh.polygon;
    // ...draw planeVertices relative to pose...
  } else {
    // draw a more general mesh that isn't a special kind of mesh
    // (a plane mesh would also have these fields)
    let vertices = mesh.vertices;   // vertices for the mesh; for planes, might be the
                                    // same as mesh.polygon plus one at the origin
    let triangles = mesh.triangles; // the triangles using the vertices
    // ...draw mesh relative to pose...
  }
});
```

Compare with:

```js
let planes = xrFrame.worldInformation.detectedPlanes;
let meshes = xrFrame.worldInformation.detectedMeshes;
// let objects = xrFrame.worldInformation.detectedObjects; // planes + meshes + all other
// future types; not sure it's worth adding if it's simply a concatenation of all the arrays

planes.forEach(plane => {
  let pose = plane.getPose(xrReferenceSpace); // origin of plane
  // plane.polygon is an array of vertices defining the boundary of a 2D polygon,
  // with x, y, z coordinates
  let planeVertices = plane.polygon;
  // ...draw planeVertices relative to pose...
});

meshes.forEach(mesh => {
  let pose = mesh.getPose(xrReferenceSpace); // origin of mesh
  // draw a more general mesh that isn't a special kind of mesh
  let vertices = mesh.vertices;   // vertices for the mesh; for planes, might be the
                                  // same as mesh.polygon plus one at the origin
  let triangles = mesh.triangles; // the triangles using the vertices
  // ...draw mesh relative to pose...
});
```
That code seems (a) overly complex and (b) subject to failure in the future. In this design, if a web page wants to render all the geometry it gets as a pretty world mesh (think of the mesh rendering you see when you tap on the world with HoloLens), it has to know about all of the various fields. If a new type of object is added ("movingObjects"), any old web pages will fail to render them because they don't have a case handler for them.

I took inspiration from https://docs.microsoft.com/en-us/uwp/api/windows.perception.spatial.surfaces.spatialsurfacemesh on what kind of API to expose for meshes. I just implemented my version; here is what the code looks like. I'm including it all below, since it's a complete example that renders planes in one color and faces in another (I currently expose ARKit faces as a kind of mesh, just so I have two kinds). The only time I look at the type of the geometry is to decide on the color. Otherwise, both expose a common "mesh" API (vertices, triangles, and optional normals and texture coordinates). In my current implementation, I expose the normals on the plane (by assigning the plane normal to all vertex normals), just to test, and don't yet compute the normals for the face mesh. Since this is expensive, I plan on adding a property to the request.
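A minimal sketch in the spirit of that description (not the actual implementation; the collection name, the type field, the computeNormals option, and the drawTriangles helper are all invented for illustration):

```js
// Hedged sketch only; none of these names come from the actual implementation.
// Hypothetical request option for skipping expensive normal computation;
// it would be passed when requesting real-world geometry.
const geometryOptions = { computeNormals: false }; // invented property

function renderWorldGeometry(xrFrame, xrReferenceSpace) {
  // Assume every detected object exposes the common "mesh" API:
  // vertices, triangles, and optional normals / texture coordinates.
  for (const mesh of xrFrame.worldInformation.detectedMeshes) { // invented collection
    const pose = mesh.getPose(xrReferenceSpace);
    // The only type-specific decision is the color:
    const color = mesh.type === 'plane' ? [0.2, 0.6, 1.0]  // planes in one color
                : mesh.type === 'face'  ? [1.0, 0.5, 0.2]  // faces in another
                : [0.8, 0.8, 0.8];
    drawTriangles(mesh.vertices, mesh.triangles, mesh.normals, pose, color); // app helper
  }
}
```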
I'd like to propose another way of looking at this question. Rather than thinking about what best represents the underlying system's mapping of the environment, we could consider what it is that many applications seem to want. That is: I have an object I'd like to place on a flat surface, so what I want is for the system (regardless of its underlying representations) to give me the best planar surface it can, since that's what my app understands. The app's request for planes is telling the system to optimize its efforts into providing this for the app. Other use cases might need more detailed meshes, and apps could also ask for those. Maybe not all devices could fulfill that, but I think all the devices we currently have could provide planes in much the same way they can all resolve a hit test. If we have an option for meshes, it doesn't have to mean that you need to check both the meshes and the planes; it could mean that all planes will also show up as meshes. It's more a question of the kind of query you are making of the environment.
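A tiny sketch of that kind of query (the worldSensing option and its fields are invented for illustration; they are not part of any spec):

```js
// Hypothetical: the app states which kinds of real-world geometry it wants,
// and the system optimizes its effort accordingly.
const session = await navigator.xr.requestSession('immersive-ar', {
  worldSensing: {   // invented option name
    planes: true,   // "give me the best planar surfaces you can"
    meshes: false   // this app has no use for detailed meshes
  }
});
```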
@bricetebbs I like what you are suggesting. Essentially: allow multiple kinds of "geometry-like" things to be requested, and provide the ones you can, as appropriate. If users want occlusion, and they want "tables" (planes above the ground), and they want "an indication of where the ground is", and so on, we can focus on just giving them those things, or indicating we can't (and letting them polyfill if appropriate). That really gels with my current thinking, too, after I've spent the last month building an authoring tool for AR on top of WebXR.

Part of where I was starting above is "we need a common lowest-common-denominator representation" that developers can rely on. But we definitely want the option for UAs to provide (and, perhaps eventually, the standard to require) more semantically meaningful representations that developers expect. Planes are an obvious one; when I was at MSR (Microsoft Research) for a summer working on RoomAlive (projection-based AR), there were folks doing excellent work on extracting planes and other semantically meaningful structures from depth data, and it's super useful. The other obvious one is "ground": it would be great if UAs could provide (assuming permission and capabilities) an estimate of the height above ground (below the device) and, in the case of planes and meshes, tag one or more with an indication that they are "the ground or floor".
I agree. Processing a mesh and asking for plane data are very different. If people want access to an occlusion mesh, or if they want to detect a plane, we need to provide them with separate APIs.
Ideally, the author should be able to request other types of planes (e.g., walls, ceilings, general surfaces, etc.).
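A hypothetical sketch of such a request (the requestDetectedPlanes method and the type strings are invented to illustrate the idea, not proposed API):

```js
// Invented names throughout; session is an XRSession obtained elsewhere.
const walls = await session.requestDetectedPlanes({ types: ['wall'] });
const horizontal = await session.requestDetectedPlanes({
  types: ['floor', 'ceiling', 'table'] // semantic classes of planes
});
```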
Discussion about data types relating to real-world geometry and whether a mesh should be used.