In some cases we have the image directly in GPU memory, for example after calling NVDEC to decode an H.264 stream or nvJPEG to decode a JPEG. But currently the C++ inference API can only be called through `Apply` with a `Mat` object, which cannot specify where the image data resides (CPU, GPU 0, or GPU 1?). In `mmdeploy/csrc/mmdeploy/apis/c/mmdeploy/common.cpp:mmdeploy_common_create_input`, I found that the `mmdeploy::Mat` is created with the `device` parameter hardcoded to 'cpu'. Is there any plan to expose the `device` parameter in the high-level API such as `Detector::Apply`, so that we can specify at inference time which memory space holds the image?
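For illustration, a minimal sketch of what is being asked for, assuming the internal `mmdeploy::Mat` constructor used in `common.cpp` (height, width, format, type, data pointer, device). The `WrapGpuImage` helper, the chosen enum values, and the exact constructor signature are assumptions, not part of the current API:

```cpp
// Minimal sketch (assumptions, not the current API): wrap a GPU buffer in an
// mmdeploy::Mat instead of the Device{"cpu"} hardcoded in
// mmdeploy_common_create_input. Constructor layout follows
// csrc/mmdeploy/core/mat.h; exact names and signatures may differ by version.
#include <memory>

#include "mmdeploy/core/mat.h"

// `gpu_ptr` points at decoded image data already resident on GPU 0,
// e.g. the output surface of NVDEC or nvJPEG. (Hypothetical helper.)
mmdeploy::Mat WrapGpuImage(void* gpu_ptr, int height, int width) {
  // Non-owning shared_ptr: the decoder keeps ownership of the buffer.
  std::shared_ptr<void> data(gpu_ptr, [](void*) {});
  // Today common.cpp passes Device{"cpu"} here; the request is to let the
  // caller choose, e.g. Device{"cuda", 0} for GPU 0 or Device{"cuda", 1}.
  return mmdeploy::Mat{height, width, mmdeploy::PixelFormat::kBGR,
                       mmdeploy::DataType::kINT8, std::move(data),
                       mmdeploy::Device{"cuda", 0}};
}
```

A `Detector::Apply` overload (or an exposed `device` field on the input) could then skip the host-to-device copy when the data is already in the memory space where the pipeline runs.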
Replies: 2 comments

- cc @lzhangzz

- @joshuafc we have plans to support passing external device memory to the API, but it's likely to be ready in v0.9 (the release after next)