How to use your toolkit with onnxruntime-gpu on Ubuntu Linux #97

Closed
hongson23 opened this issue Nov 7, 2021 · 3 comments

@hongson23

Hello @DefTruth
Thanks for your work. I tested some of the face detectors in your toolkit and they work well on the CPU under Ubuntu 16.04.
I would like to use the GPU, so I downloaded onnxruntime-linux-x64-gpu-1.7.0.tgz.
I followed your suggestion:
cp you-path-to-downloaded-or-built-onnxruntime/lib/onnxruntime lite.ai.toolkit/lib
and used the headers offered by this repo; I left those directories unchanged and only copied the lib.
But when I checked, the models were still not running on the GPU.
Can you give me some suggestions?
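
One quick sanity check here: list the execution providers that the linked onnxruntime actually exposes; CUDAExecutionProvider must be among them for GPU inference to be possible. A minimal sketch, assuming a build recent enough to ship the Ort::GetAvailableProviders() C++ wrapper (older releases may only expose the C API equivalent):

#include <iostream>
#include <string>
#include <vector>
#include "onnxruntime/core/session/onnxruntime_cxx_api.h"

int main() {
  // Print every execution provider compiled into the linked onnxruntime;
  // "CUDAExecutionProvider" must appear for GPU inference to be possible.
  for (const std::string &provider : Ort::GetAvailableProviders())
    std::cout << provider << std::endl;
  return 0;
}

If only CPUExecutionProvider is listed, the GPU package is not the library that actually got linked.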

@DefTruth (Owner) commented Nov 7, 2021

@hongson23 Hi, see the notes in lite/ort/core/ort_config.h:

#ifndef LITE_AI_ORT_CORE_ORT_CONFIG_H
#define LITE_AI_ORT_CORE_ORT_CONFIG_H

#include "ort_defs.h"
#include "lite/lite.ai.headers.h"

#ifdef ENABLE_ONNXRUNTIME
#include "onnxruntime/core/session/onnxruntime_cxx_api.h"
/* Users who want to enable onnxruntime and lite.ai.toolkit with
 * CUDA support need to define the USE_CUDA macro manually. It
 * seems that the latest onnxruntime no longer pre-defines the
 * USE_CUDA macro, leaving the decision to users who actually know
 * the environment of the running device. */
// #define USE_CUDA
#  ifdef USE_CUDA
#include "onnxruntime/core/providers/cuda/cuda_provider_factory.h"
#  endif
#endif

namespace core {}

#endif //LITE_AI_ORT_CORE_ORT_CONFIG_H

Defining a USE_CUDA macro will enable OrtSessionOptionsAppendExecutionProvider_CUDA and make the toolkit try to run the model on the GPU; see lite/ort/core/ort_handler.cpp:

  // GPU compatible.
  // OrtCUDAProviderOptions provider_options;
  // session_options.AppendExecutionProvider_CUDA(provider_options);
#ifdef USE_CUDA
  OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0); // C API (stable); 0 = CUDA device id.
#endif
  // 1. session
  ort_session = new Ort::Session(ort_env, onnx_path, session_options);
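
To define the macro concretely, a minimal sketch: uncomment the line already present in lite/ort/core/ort_config.h (or, equivalently, pass it as a compile definition such as -DUSE_CUDA in your build system):

// lite/ort/core/ort_config.h, with the macro switched on:
#define USE_CUDA  // enable the CUDA execution provider path

#ifdef USE_CUDA
#include "onnxruntime/core/providers/cuda/cuda_provider_factory.h"
#endif

With USE_CUDA defined, the #ifdef block above is compiled in and the session is created with the CUDA provider appended on device 0.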

I hope it helps ~

@hongson23 (Author)

Hi @DefTruth
Thanks for your help!
It works perfectly with CUDA 11, cuDNN 8, Ubuntu 16.04, and onnxruntime-gpu 1.7.

@SonwYang

Hello! How do I define the USE_CUDA macro? Can you give me an example? Thank you!
