Raycasting onto the Depth map #16

Closed
dogadogan opened this issue Dec 11, 2023 · 8 comments

@dogadogan

dogadogan commented Dec 11, 2023

Firstly, thank you so much for this great repo.

I was wondering how I can raycast toward the depth map generated by Depth API - so I can make use of the hit 3D point. Is there a specific method/script for doing this?

This page mentions "Raycasting: using rays to point at physical surfaces. Supports use cases like content placement" - but I couldn't find information on how to do this in the context of Depth API.

Thank you!

EDIT (for visibility):

Starting with v71, we've released our official solution for depth raycasting as part of MRUK. You can find more information on it here. There's also a sample within MRUK that showcases its usage here.

We strongly recommend using the official solution over the solutions mentioned in this thread moving forward.

@TudorJude
Contributor

Hey dogadogan,

We will add that feature sometime in the near future.

@dogadogan
Author

Hi @TudorJude, thank you so much for your reply!

I'm a bit confused; the Meta Depth API for Unity website says this is already available:

[Screenshot of the Depth API documentation listing "Raycasting: using rays to point at physical surfaces" as a supported use case]

And I have definitely already seen some demos utilizing this. Would you be able to point me to the right API documentation?

If this has not been implemented yet, might you be able to share an estimated time for when this will be available?

Thank you so much for your time and help! :)

@vasylbo
Contributor

vasylbo commented Dec 13, 2023

Hi @dogadogan,
The Depth API itself already allows this, since it provides very low-level access; we just don't have any sample code for it yet. By "allows" I mean that the RenderTexture is available to the application, so the things in the screenshot can be implemented now.
We don't have a sample scene for it yet, but we can share example code directly here if you're in a rush to get this working. Be aware that it's neither production-ready nor performant.
A raycasting scene is in our backlog; we'll start working on it in the next few weeks.
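
For concreteness, here's a minimal sketch of what that low-level access looks like, assuming the depth texture is published under the global shader property _EnvironmentDepthTexture (the name the occlusion shaders in this repo use):

using UnityEngine;

// Minimal sketch: check for and read the global environment depth texture.
// Assumes depth rendering has been started (e.g. by EnvironmentDepthTextureProvider),
// which publishes the texture under "_EnvironmentDepthTexture".
public class DepthTextureProbe : MonoBehaviour
{
    private static readonly int DepthTextureId = Shader.PropertyToID("_EnvironmentDepthTexture");

    private void Update()
    {
        // Null until the Depth API has produced its first depth frame.
        var depthTexture = Shader.GetGlobalTexture(DepthTextureId);
        if (depthTexture != null)
        {
            Debug.Log($"Depth texture available: {depthTexture.width} x {depthTexture.height}");
        }
    }
}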

@dogadogan
Author

Yes, it would be great if you could share the example code for raycasting here!

Thank you so much in advance!

@TudorJude
Contributor

Hey dogadogan,

Here's the solution. As @vasylbo mentioned, it is neither production-ready nor very performant, but you can use it to play around with.

First, you will need to create a new compute shader that computes the raycasts. Here's the code for it:

#pragma kernel CSMain

struct RaycastResult {
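  // Result[0] = world-space hit position, Result[1] = world-space hit normal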
  float3 Result[2];
};

StructuredBuffer<float2> RaycastRequests;
RWStructuredBuffer<RaycastResult> RaycastResults;

Texture2DArray<float> _EnvironmentDepthTexture;

float4x4 _EnvironmentDepthReprojectionMatrices[2];
float4 _EnvironmentDepthZBufferParams;
float4 _ZBufferParams;
float4x4 unity_StereoMatrixInvVP[2];

float SampleEnvironmentDepth(const float2 uv, const int slice) {
  const float4 reprojectedUV =
      mul(_EnvironmentDepthReprojectionMatrices[slice], float4(uv.x, uv.y, 0.0, 1.0));
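  // NOTE: 2000 assumes a fixed depth texture resolution; ideally this would be
  // read from the texture's actual dimensions rather than hardcoded.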
  const uint3 depthtextureuv = uint3(reprojectedUV.x * 2000, reprojectedUV.y * 2000, 0);

  // depth z buffer value
  const float inputDepthEye = _EnvironmentDepthTexture[depthtextureuv];

  const float inputDepthNdc = inputDepthEye * 2.0 - 1.0;
  const float envLinearDepth = (1.0f / (inputDepthNdc + _EnvironmentDepthZBufferParams.y)) * _EnvironmentDepthZBufferParams.x;

  // depth camera z buffer
  float envDepth = (1 - envLinearDepth * _ZBufferParams.w) / (envLinearDepth * _ZBufferParams.z);

  return envDepth;
}

// These two helpers mirror the equivalents in Unity's SRP Core Common.hlsl.
float4 ComputeClipSpacePosition(float2 positionNDC, float deviceDepth)
{
    float4 positionCS = float4(positionNDC * 2.0 - 1.0, deviceDepth, 1.0);

    return positionCS;
}

float3 ComputeWorldSpacePosition(float2 positionNDC, float deviceDepth, float4x4 invViewProjMatrix)
{
    float4 positionCS  = ComputeClipSpacePosition(positionNDC, deviceDepth);
    float4 hpositionWS = mul(invViewProjMatrix, positionCS);
    return hpositionWS.xyz / hpositionWS.w;
}

// https://gist.github.com/bgolus/a07ed65602c009d5e2f753826e8078a0
float3 ComputeWorldSpaceNormal(float2 uv, const int slice) {
  // get current pixel's view space position
  float3 viewSpacePos_c = ComputeWorldSpacePosition(uv, SampleEnvironmentDepth(uv, slice), unity_StereoMatrixInvVP[slice]);

  // TODO: fix hardcoded screen space
  float2 offsetTexSpace = 6.0f / 2000.0f;

  // get world-space positions at 1 pixel offsets in each major direction
  float2 offsetUV = uv + float2(1.0, 0.0) * offsetTexSpace;
  float3 viewSpacePos_r = ComputeWorldSpacePosition(offsetUV, SampleEnvironmentDepth(offsetUV, slice), unity_StereoMatrixInvVP[slice]);

  offsetUV = uv + float2(0.0, 1.0) * offsetTexSpace;
  float3 viewSpacePos_u = ComputeWorldSpacePosition(offsetUV, SampleEnvironmentDepth(offsetUV, slice), unity_StereoMatrixInvVP[slice]);

  // get the difference between the current and each offset position
  float3 hDeriv = viewSpacePos_r - viewSpacePos_c;
  float3 vDeriv = viewSpacePos_u - viewSpacePos_c;

  // get the world-space normal from the cross product of the diffs
  float3 viewNormal = normalize(cross(hDeriv, vDeriv));

  return viewNormal;
}

// depending on the use case workgroup amount can be optimized for better performance
[numthreads(1,1,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    const uint slice = 0; // sample the left eye's depth (slice 0) only

    float2 raycastPosition = RaycastRequests[id.x];

    float envDepth = SampleEnvironmentDepth(raycastPosition, slice);
    float3 worldPos = ComputeWorldSpacePosition(raycastPosition, envDepth, unity_StereoMatrixInvVP[slice]);

    // Result is a float3, so store just the position (in the original
    // float4(worldPos, envDepth) the depth was implicitly truncated away).
    RaycastResults[id.x].Result[0] = worldPos;
    // Negated so the normal points back toward the viewer.
    RaycastResults[id.x].Result[1] = -ComputeWorldSpaceNormal(raycastPosition, slice);
}

Next, you will need a script to access this data from your C# code; let's call it EnvironmentDepthAccess.cs. Add it somewhere in your scene that has occlusions enabled, and drag and drop the compute shader asset created above into its ComputeShader field:

using System.Collections.Generic;
using System.Linq;
using System.Runtime.InteropServices;
using UnityEngine;

    public class EnvironmentDepthAccess : MonoBehaviour
    {
        private static readonly int raycastResultsId = Shader.PropertyToID("RaycastResults");
        private static readonly int raycastRequestsId = Shader.PropertyToID("RaycastRequests");

        [SerializeField] private ComputeShader _computeShader;

        private ComputeBuffer _requestsCB;
        private ComputeBuffer _resultsCB;

        // Layout must match the RaycastResult struct in the compute shader
        // (two float3 values: position, then normal).
        public struct DepthRaycastResult
        {
            public Vector3 Position;
            public Vector3 Normal;
        }

        /**
         * Perform a raycast at multiple view-space coordinates and fill the result list.
         * Blocking means this function waits for the GPU readback and returns the result
         * synchronously, which is performance-heavy.
         * The result list will match the size of the requested coordinates.
         */
        public void RaycastViewSpaceBlocking(List<Vector2> viewSpaceCoords, out List<DepthRaycastResult> result)
        {
            result = DispatchCompute(viewSpaceCoords);
        }

        /**
         * Perform a raycast at a single view-space coordinate and return the result.
         * Blocking means this function waits for the GPU readback and returns the result
         * synchronously, which is performance-heavy.
         */
        public DepthRaycastResult RaycastViewSpaceBlocking(Vector2 viewSpaceCoord)
        {
            var depthRaycastResult = DispatchCompute(new List<Vector2>() { viewSpaceCoord });
            return depthRaycastResult[0];
        }


        private List<DepthRaycastResult> DispatchCompute(List<Vector2> requestedPositions)
        {
            UpdateCurrentRenderingState();

            int count = requestedPositions.Count;

            var (requestsCB, resultsCB) = GetComputeBuffers(count);
            requestsCB.SetData(requestedPositions);

            _computeShader.SetBuffer(0, raycastRequestsId, requestsCB);
            _computeShader.SetBuffer(0, raycastResultsId, resultsCB);

            // The kernel declares [numthreads(1,1,1)], so one thread group is
            // dispatched per raycast request.
            _computeShader.Dispatch(0, count, 1, 1);

            var raycastResults = new DepthRaycastResult[count];
            resultsCB.GetData(raycastResults);

            return raycastResults.ToList();
        }

        (ComputeBuffer, ComputeBuffer) GetComputeBuffers(int size)
        {
            if (_requestsCB != null && _resultsCB != null && _requestsCB.count != size)
            {
                _requestsCB.Release();
                _requestsCB = null;
                _resultsCB.Release();
                _resultsCB = null;
            }

            if (_requestsCB == null || _resultsCB == null)
            {
                _requestsCB = new ComputeBuffer(size, Marshal.SizeOf<Vector2>(), ComputeBufferType.Structured);
                _resultsCB = new ComputeBuffer(size, Marshal.SizeOf<DepthRaycastResult>(),
                    ComputeBufferType.Structured);
            }

            return (_requestsCB, _resultsCB);
        }

        private void UpdateCurrentRenderingState()
        {
            _computeShader.SetTextureFromGlobal(0, EnvironmentDepthTextureProvider.DepthTextureID,
                EnvironmentDepthTextureProvider.DepthTextureID);
            _computeShader.SetMatrixArray(EnvironmentDepthTextureProvider.ReprojectionMatricesID,
                Shader.GetGlobalMatrixArray(EnvironmentDepthTextureProvider.Reprojection3DOFMatricesID));
            _computeShader.SetVector(EnvironmentDepthTextureProvider.ZBufferParamsID,
                Shader.GetGlobalVector(EnvironmentDepthTextureProvider.ZBufferParamsID));

            // See UniversalRenderPipelineCore for property IDs
            _computeShader.SetVector("_ZBufferParams", Shader.GetGlobalVector("_ZBufferParams"));
            _computeShader.SetMatrixArray("unity_StereoMatrixInvVP",
                Shader.GetGlobalMatrixArray("unity_StereoMatrixInvVP"));
        }

        private void OnDestroy()
        {
            // Release both buffers (the original only released the results buffer).
            _requestsCB?.Release();
            _resultsCB?.Release();
        }
    }

Now, you can simply call one of the two public functions to get the DepthRaycastResult and work with it.

Here's a simple example of placing _someObjectToPlace at the raycast hit position and rotating it according to the hit's normal:

void Update()
{
    // Raycast at a point slightly in front of the controller anchor
    var worldSpaceCoordinate =
        _leftControllerAnchor.transform.position + _leftControllerAnchor.transform.forward * 0.1f;

    // Convert the world-space position to the left eye's viewport coordinates
    var viewSpaceCoordinate =
        _centerEyeCamera.WorldToViewportPoint(worldSpaceCoordinate,
            Camera.MonoOrStereoscopicEye.Left);

    // Perform a raycast
    var raycastResult = _environmentDepthAccess.RaycastViewSpaceBlocking(viewSpaceCoordinate);

    // Position some object at the ray hit
    _someObjectToPlace.transform.position = raycastResult.Position;

    // Use the normal to rotate the indicator (note that LookRotation takes a forward direction, not up)
    _someObjectToPlace.transform.rotation = Quaternion.LookRotation(raycastResult.Normal);
}

Note: if you look at the code, you'll notice that only the 3DOF reprojection path is supported. This means you need to set 3DOF to true on the depth texture provider:

[Screenshot: the 3DOF reprojection option enabled on the EnvironmentDepthTextureProvider component]
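
As a side note: if the blocking GetData readback becomes a bottleneck, a non-blocking variant can be sketched with Unity's AsyncGPUReadback. The method below is hypothetical (not part of the script above); it would live in EnvironmentDepthAccess and requires "using UnityEngine.Rendering;". Results arrive a few frames later via a callback instead of stalling the main thread:

// Hypothetical non-blocking variant of DispatchCompute. Sketch only: it assumes
// the buffers are not resized or released before the readback completes.
private void DispatchComputeAsync(List<Vector2> requestedPositions,
    System.Action<DepthRaycastResult[]> onComplete)
{
    UpdateCurrentRenderingState();

    int count = requestedPositions.Count;
    var (requestsCB, resultsCB) = GetComputeBuffers(count);
    requestsCB.SetData(requestedPositions);

    _computeShader.SetBuffer(0, raycastRequestsId, requestsCB);
    _computeShader.SetBuffer(0, raycastResultsId, resultsCB);
    _computeShader.Dispatch(0, count, 1, 1);

    // Schedules a GPU -> CPU copy; the callback fires on the main thread when done.
    AsyncGPUReadback.Request(resultsCB, request =>
    {
        if (!request.hasError)
            onComplete(request.GetData<DepthRaycastResult>().ToArray());
    });
}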

@Orinion

Orinion commented Jan 23, 2024

Hi, first of all, thanks for sharing the example code! Sadly, the raycast position was not working correctly for me, so I changed the shader to only compute the depth. The benefit of this is that it also works with BiRP. I uploaded the project in case anyone else wants to try out raycasting on the Depth API.

@trev3d

trev3d commented May 29, 2024

@dogadogan
Sorry to necropost, but I cobbled together a (not great but functional) 6DOF depth raycast script you can use here:
https://github.com/anaglyphs/lasertag/tree/master/Assets/Anaglyph/XRTemplate/DepthCast

You can see a video of it here:
https://x.com/trev3d/status/1794867907153059975

(Only works with URP I think, but it shouldn't be too hard to get working with BiRP if you need...)

@TudorJude
Contributor

Starting with v71, we've released our official solution for depth raycasting as part of MRUK. You can find more information on it here. There's also a sample within MRUK that showcases its usage here.

We strongly recommend using the official solution over the solutions mentioned in this thread moving forward.
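
For anyone landing here now, usage of the official solution looks roughly like the sketch below. It is based on MRUK's EnvironmentRaycastManager; treat the exact type and method names as assumptions and verify them against the linked documentation and sample:

using Meta.XR.MRUtilityKit;
using UnityEngine;

// Sketch of the official MRUK depth raycast (v71+). Assumes an
// EnvironmentRaycastManager component is present in the scene; check the MRUK
// docs for the exact API surface.
public class OfficialDepthRaycastExample : MonoBehaviour
{
    [SerializeField] private EnvironmentRaycastManager _raycastManager;
    [SerializeField] private Transform _rayOrigin;  // e.g. a controller anchor
    [SerializeField] private Transform _hitMarker;  // object placed at the hit point

    private void Update()
    {
        var ray = new Ray(_rayOrigin.position, _rayOrigin.forward);
        if (_raycastManager.Raycast(ray, out var hit))
        {
            _hitMarker.position = hit.point;
            // LookRotation takes a forward direction, not an up direction.
            _hitMarker.rotation = Quaternion.LookRotation(hit.normal);
        }
    }
}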
