Development tools #8
base: rt-xr/development
Conversation
…to facilitate interactivity implementation
Thank you for sharing your progress on the tools you are developing. As discussed during last week's call, you consider this PR a draft. What exactly do you plan to add to it in order to have these utility classes integrated?
I understand the code sample you provide is designed to work with a specific scene document combining interactivity and anchoring. It hardcodes m_AnchoredGameObject
- the object that is signaled to be manipulated in the scene - by getting it from a hardcoded index in the VirtualSceneGraph.
Shouldn't the target for 'action manipulate' be retrieved from the action's definition as signaled in the scene instead - here?
The MPEG_ActionManipulateEvent holds a reference to a UnityEngine.InputSystem.InputAction. It would also make sense for it to hold references to the objects that need to be manipulated, or maybe, in your framework, the node indices that can be used to retrieve the GameObject from VirtualSceneGraph. The application can then manipulate these objects, eventually taking into account constraints such as anchoring.
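To illustrate the suggestion, here is a minimal sketch of what the event could look like if it carried the node indices declared in the scene description. The `NodeIndices` field and the `GetGameObjectFromNodeIndex` accessor on `VirtualSceneGraph` are hypothetical names, not part of the current PR:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.InputSystem;

namespace GLTFast {
    public struct MPEG_ActionManipulateEvent : MPEG_ActionEvent {
        public InputAction Action;

        // Hypothetical: target nodes as declared in the glTF action definition,
        // instead of a hardcoded index into the scene graph.
        public IReadOnlyList<int> NodeIndices;

        // Resolve the affected GameObjects through the scene graph
        // (GetGameObjectFromNodeIndex is an assumed accessor).
        public IEnumerable<GameObject> ResolveTargets(VirtualSceneGraph graph) {
            foreach (var index in NodeIndices)
                yield return graph.GetGameObjectFromNodeIndex(index);
        }
    }
}
```

This would let the application enumerate `ResolveTargets(...)` rather than rely on a hardcoded `m_AnchoredGameObject`.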
```diff
@@ -114,7 +124,6 @@ public static string GetBindingFromUserInputDescription(string description)
             case "touchscreen":
                 builder.Append("<Touchscreen>");
```
Is this covered by a specification?
No, this works with the Unity input path system; it needs to be updated to support an OpenXR path.
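As a rough sketch of that future update, the builder could branch between Unity control paths and OpenXR component paths. The OpenXR path shown is a placeholder for illustration, not a binding this PR defines:

```csharp
using System;

// Hypothetical sketch: map a user-facing device description either to a
// Unity input-system control path or to an OpenXR component path.
static class BindingPaths {
    public static string GetBinding(string description, bool useOpenXR) {
        switch (description.ToLowerInvariant()) {
            case "touchscreen":
                return useOpenXR
                    ? "/user/hand/right/input/select/click" // placeholder OpenXR path
                    : "<Touchscreen>";                      // Unity input-system path
            default:
                throw new ArgumentException($"Unknown device: {description}");
        }
    }
}
```

The actual OpenXR interaction-profile paths would have to come from the target runtime's supported profiles.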
```csharp
namespace GLTFast {
    public interface MPEG_ActionEvent { }

    public struct MPEG_ActionManipulateEvent : MPEG_ActionEvent
```
Shouldn't the action hold a reference to the nodes that need to be manipulated, as specified in the glTF document?
This is a good idea. This PR provides a base/draft implementation to help with the IBC demo; any improvement is welcome.
Hello, thank you for your review. To answer your questions:
I am planning to add more fields to the event structures, such as GameObject references, and more handy tools.
Yes it could, and this may be handier, but for the IBC demo it should be fine.
I have created a tool to simplify development between a glTF file and the Unity application.
This tool is still under development but first targets the IBC demo deadline (mid-September), and is intended to be merged into the core rt-xr/glTFast repository once done. It resolves the issue here (5G-MAG/rt-xr-unity-player#41).
In order to test this new tool, here is an example application script to use:
This simple script allows placing a sofa model on the detected floor (which will appear in a transparent yellow color) with the help of a mobile device (tested on an Android tablet).
Please first:
The StartApplication method is called once the glTF file has loaded completely (after gltfsceneViewer.Load).
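The load-then-start sequencing described above could look roughly like the following. The `GltfSceneViewer` type, its `Load` signature, and the field names are assumptions for illustration; the actual API comes from the rt-xr player:

```csharp
using UnityEngine;

public class AppBootstrap : MonoBehaviour {
    // Hypothetical viewer reference; the real type is provided by the rt-xr player.
    [SerializeField] private GltfSceneViewer gltfsceneViewer;
    [SerializeField] private string gltfUri;

    async void Start() {
        // Wait until the glTF document has loaded completely
        // before starting the application logic.
        await gltfsceneViewer.Load(gltfUri);
        StartApplication();
    }

    void StartApplication() {
        // Application-specific setup: anchoring, input bindings,
        // placing the model on the detected floor, etc.
    }
}
```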