ComputeBuffer.BeginWrite/EndWrite adaptation #136
My understanding is the same. Apart from that, it would also reduce memory pressure, especially in multi-4K situations. I'll consider using it in future updates.
I kept looking for information about it, and this thread on the forums seems to be a pretty good resource, although the Unity engineer states there's still much to be desired in this particular area. It seems you can declare a buffer as cpu-write-only and gpu-read-only. I don't know enough about the deeper tech side of things, but the Unity staff also makes a point that I think is a fit for the application in the NDI decoder.
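If I read the docs right, that cpu-write-only declaration presumably maps to the `ComputeBufferMode` argument on the `ComputeBuffer` constructor. A minimal sketch (the helper name and the count/stride parameters are placeholders, not values from the decoder):

```csharp
using UnityEngine;

public static class BufferSetup
{
    // Hypothetical helper: creates a buffer the CPU can write into via
    // BeginWrite/EndWrite while the GPU reads from it. SubUpdates is the
    // mode that enables the BeginWrite/EndWrite pair.
    public static ComputeBuffer CreateCpuWritable(int count, int stride)
    {
        return new ComputeBuffer(count, stride,
                                 ComputeBufferType.Default,
                                 ComputeBufferMode.SubUpdates);
    }
}
```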
Thanks for the information. It makes sense, and I'd like to deep-dive into it in a future update, but it won't happen very soon, for a few reasons. In the meantime, I'll keep this issue ticket open as an "enhancement".
This is more of a question (and hopefully a discussion), as I know @keijiro knows his way around Unity's bleeding-edge APIs :)
I noticed that the call to ComputeBuffer.SetData() inside the frame decoding brings performance down with multiple streams or higher resolutions, as it stalls the main thread.
I had a look around, and Unity offers an experimental way to write to a ComputeBuffer asynchronously: https://docs.unity3d.com/2020.1/Documentation/ScriptReference/ComputeBuffer.BeginWrite.html
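To make the idea concrete, here's a rough sketch of how a decode path might swap `SetData` for the `BeginWrite`/`EndWrite` pair. This is not KlakNDI's actual code; the `FrameUploader` class and its names are made up for illustration, and it assumes the buffer holds one `uint` per pixel:

```csharp
using Unity.Collections;
using UnityEngine;

public class FrameUploader : System.IDisposable
{
    readonly ComputeBuffer _buffer;

    public FrameUploader(int count)
    {
        // SubUpdates mode is required before BeginWrite/EndWrite can be used.
        _buffer = new ComputeBuffer(count, sizeof(uint),
                                    ComputeBufferType.Default,
                                    ComputeBufferMode.SubUpdates);
    }

    // Replaces _buffer.SetData(frameData): maps a CPU-writable slice of
    // the buffer, copies into it, then unmaps. This avoids the extra
    // staging copy that SetData performs on the main thread.
    public void Upload(NativeArray<uint> frameData)
    {
        var dest = _buffer.BeginWrite<uint>(0, frameData.Length);
        dest.CopyFrom(frameData);
        _buffer.EndWrite<uint>(frameData.Length);
    }

    public void Dispose() => _buffer.Release();
}
```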
I know 4K streams and the like are not your intended use case, but maybe that'd be a viable optimization. Happy to hear your thoughts.