Question: Inference from CUDA allocated memory #10180

Hi,
Is it possible to run inference (using the CUDA execution provider) from memory already allocated on CUDA, without moving it from GPU to CPU and back?
Could you provide an example of such an implementation in C#?
Thanks,
/M

Comments
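[Editor's note: a minimal C# sketch of the IOBinding approach discussed in this thread. The model path "model.onnx", the input/output names "input"/"output", and the shape are placeholder assumptions, and constructor signatures should be verified against the installed Microsoft.ML.OnnxRuntime.Gpu version. The idea: allocate the input through a CUDA-backed OrtAllocator, bind it with OrtIoBinding, and call RunWithBinding so no host round-trip occurs.]

```csharp
using System;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class GpuBindingSketch
{
    static void Main()
    {
        // Assumed model: float32 input "input" of shape [1, 3, 224, 224]
        // and a single output named "output".
        using var options = SessionOptions.MakeSessionOptionWithCudaProvider(0);
        using var session = new InferenceSession("model.onnx", options);

        // Memory info describing allocations on CUDA device 0.
        var cudaMemInfo = new OrtMemoryInfo(OrtMemoryInfo.allocatorCUDA,
                                            OrtAllocatorType.DeviceAllocator,
                                            0, OrtMemType.Default);

        // Device allocator backed by the session's CUDA execution provider.
        using var cudaAllocator = new OrtAllocator(session, cudaMemInfo);

        long[] shape = { 1, 3, 224, 224 };
        uint byteSize = 1 * 3 * 224 * 224 * sizeof(float);
        using OrtMemoryAllocation deviceInput = cudaAllocator.Allocate(byteSize);

        // deviceInput.Pointer is device memory: fill it with a
        // device-to-device copy (e.g. cudaMemcpy) from the source buffer.

        using OrtIoBinding binding = session.CreateIoBinding();
        binding.BindInput("input", TensorElementType.Float, shape, deviceInput);
        // Keep the output on the device too; read it back only if needed.
        binding.BindOutputToDevice("output", cudaMemInfo);

        using var runOptions = new RunOptions();
        session.RunWithBinding(runOptions, binding);
    }
}
```

If the result does need to come back to the CPU, binding the output to a CPU OrtMemoryInfo, or reading it via binding.GetOutputValues(), are the usual options.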
Thanks! How would you go about taking a pointer to GPU memory and binding it as input with IOBinding? Currently IOBinding.BindInput has two overloads, taking either a FixedBufferOnnxValue or an OrtMemoryAllocation, and I'm unable to find any way to instantiate either of these from a pointer to unmanaged memory on the GPU. Thanks in advance,
How do you represent your pointer to GPU memory in your C# code?
I am interfacing with an NVIDIA DeepStream application and retrieving the pointer as described in the DeepStream API docs. Using cudaMemcpy2D I am able to copy the buffer to the host without problems.
Example:
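[Editor's note: the commenter's snippet did not survive the page extraction. As a stand-in, here is a rough sketch of what such an interop call can look like from C#. The cudart library name, frame dimensions, and pitch are illustrative assumptions; widths and pitches are given in bytes, per the CUDA runtime documentation for cudaMemcpy2D.]

```csharp
using System;
using System.Runtime.InteropServices;

static class CudaInterop
{
    public enum CudaMemcpyKind
    {
        HostToHost = 0, HostToDevice = 1, DeviceToHost = 2, DeviceToDevice = 3
    }

    // cudaError_t cudaMemcpy2D(void* dst, size_t dpitch, const void* src,
    //                          size_t spitch, size_t width, size_t height,
    //                          cudaMemcpyKind kind)
    // The library name depends on the installed CUDA toolkit version.
    [DllImport("cudart64_110", EntryPoint = "cudaMemcpy2D")]
    public static extern int CudaMemcpy2D(IntPtr dst, UIntPtr dpitch,
                                          IntPtr src, UIntPtr spitch,
                                          UIntPtr widthInBytes, UIntPtr height,
                                          CudaMemcpyKind kind);

    // Copies a pitched device frame (e.g. a DeepStream buffer surface)
    // into a packed host buffer. devicePtr and pitch come from DeepStream.
    public static IntPtr CopyFrameToHost(IntPtr devicePtr, int pitch,
                                         int width, int height, int bytesPerPixel)
    {
        int rowBytes = width * bytesPerPixel;
        IntPtr hostBuffer = Marshal.AllocHGlobal(rowBytes * height);
        int status = CudaMemcpy2D(hostBuffer, (UIntPtr)(uint)rowBytes,
                                  devicePtr, (UIntPtr)(uint)pitch,
                                  (UIntPtr)(uint)rowBytes, (UIntPtr)(uint)height,
                                  CudaMemcpyKind.DeviceToHost);
        if (status != 0)
        {
            Marshal.FreeHGlobal(hostBuffer);
            throw new InvalidOperationException($"cudaMemcpy2D failed: {status}");
        }
        return hostBuffer;
    }
}
```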
I notice that OrtValue has an IntPtr option; however, I don't see any way of passing an OrtValue to the inference session's Run() or binding it with IOBinding. The Python API docs describe binding input data that is already on the device (Scenario 2). Thanks,
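[Editor's note: as far as I can tell, this became possible in later releases of the C# package (around 1.15; worth verifying against the version in use): OrtValue.CreateTensorValueWithData can wrap an existing device pointer without copying, and Run gained an overload that accepts OrtValues directly. A sketch, with the input/output names, shape, and pointer as placeholder assumptions:]

```csharp
using System;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class OrtValueFromDevicePointer
{
    // devicePtr points at float32 data already resident on CUDA device 0,
    // e.g. a buffer obtained from DeepStream.
    static void RunFromDevicePointer(InferenceSession session, IntPtr devicePtr)
    {
        long[] shape = { 1, 3, 224, 224 };              // placeholder shape
        long byteSize = 1 * 3 * 224 * 224 * sizeof(float);

        var cudaMemInfo = new OrtMemoryInfo(OrtMemoryInfo.allocatorCUDA,
                                            OrtAllocatorType.DeviceAllocator,
                                            0, OrtMemType.Default);

        // Wraps the existing device buffer; no copy is made, so the buffer
        // must stay alive (and unchanged) until inference completes.
        using OrtValue input = OrtValue.CreateTensorValueWithData(
            cudaMemInfo, TensorElementType.Float, shape, devicePtr, byteSize);

        using var runOptions = new RunOptions();
        using var results = session.Run(runOptions,
                                        new[] { "input" },    // placeholder name
                                        new[] { input },
                                        new[] { "output" });  // placeholder name
    }
}
```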
This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.