
Issue with ep:STVM, crash due to a null pointer #10212

Closed
xadupre opened this issue Jan 6, 2022 · 4 comments

Comments

@xadupre
Member

xadupre commented Jan 6, 2022

Describe the bug
An experiment with ep:STVM (PR #10211) fails due to a null pointer.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): linux ubuntu 18.04 (and WSL as well)
  • ONNX Runtime installed from (source or binary): source
  • ONNX Runtime version: 1.10.1
  • Python version: 3.8
  • Visual Studio version (if applicable):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:

To Reproduce

Steps used to build:

git clone ...
cd onnxruntime
# fails because tvm is not built automatically, but it does clone all submodules
python3 ./tools/ci_build/build.py --config Release --skip_tests --build_wheel --parallel  --build_dir ./build/linux_cpu_stvm --use_stvm --build_shared_lib
cd cmake/external/tvm_update
cmake -B build -DCMAKE_BUILD_TYPE=Release -DUSE_LLVM=ON -DUSE_OPENMP=gnu -DUSE_MICRO=ON
cd build
make -j 4
cd ../../..
python3 ./tools/ci_build/build.py --config Release --skip_tests --build_wheel --parallel  --build_dir ./build/linux_cpu_stvm --use_stvm

export TVM_HOME=~/github/onnxruntime/cmake/external/tvm_update
export PYTHONPATH=~/github/onnxruntime/build/linux_cpu_stvm/Release/
export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}

Script to use:

import numpy
import onnxruntime

# wget https://github.com/onnx/models/raw/master/vision/classification/squeezenet/model/squeezenet1.1-7.onnx
onnx_file = "squeezenet1.1-7.onnx"
loca = onnxruntime.__file__
print("-ORT %r" % loca)
print("PROV: %r" % onnxruntime.get_available_providers())
print("ONNX %r" % onnx_file)

print("**************************")
if False:
    stvm_session = onnxruntime.InferenceSession(
        onnx_file, providers=["StvmExecutionProvider"])
print("**************************")


provider_options = dict(
    target="llvm -mcpu=core-avx2",  # create InferenceSession with no provider_option to get these values
    target_host="llvm -mcpu=core-avx2",
    opt_level=3,
    freeze_weights=True,
    tuning_file_path="",
    tuning_type="Ansor",
)

print("+ Creating CPU")
cpu_sess = onnxruntime.InferenceSession(
    onnx_file,
    providers=['CPUExecutionProvider'],
)

for i, inp in enumerate(cpu_sess.get_inputs()):
    print(i, inp.name, inp.type, inp.shape)

print("+ Creating STVM")
so = onnxruntime.SessionOptions()
so.log_severity_level = 0
so.log_verbosity_level = 0
so.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL
stvm_session = onnxruntime.InferenceSession(
    onnx_file,
    so,
    providers=["StvmExecutionProvider"],
    provider_options=[provider_options]
)
print("+ done.")

data = numpy.random.randn(1, 3, 224, 224).astype(numpy.float32)

results = []
for i, sess in enumerate([cpu_sess, stvm_session]):
    if sess is None:
        continue
    print("START i=%d" % i)
    got = sess.run(None, {'input': data})[0]  # None fetches all outputs; 'input' is an input name, not an output name
    print("i=%d - %r" % (i, got.shape))
    results.append(got)

Output (the error message comes from PR #10211):

2022-01-06 19:58:42.051927700 [E:onnxruntime:, inference_session.cc:1449 operator()] Exception during initialization: /home/xadupre/github/onnxruntime/onnxruntime/core/providers/stvm/stvm_api.cc:39 tvm::runtime::Module stvm::TVMCompile(const string&, const string&, const string&, const string&, int, int, bool, const std::vector<std::vector<long int> >&, bool, const string&, const string&) compile != nullptr was false. 

Traceback (most recent call last):
  File "stvm_example.py", line 42, in <module>
    stvm_session = onnxruntime.InferenceSession(
  File "/home/xadupre/github/onnxruntime/build/linux_cpu_stvm/Release/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/xadupre/github/onnxruntime/build/linux_cpu_stvm/Release/onnxruntime/capi/onnxruntime_inference_collection.py", line 379, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /home/xadupre/github/onnxruntime/onnxruntime/core/providers/stvm/stvm_api.cc:39 tvm::runtime::Module stvm::TVMCompile(const string&, const string&, const string&, const string&, int, int, bool, const std::vector<std::vector<long int> >&, bool, const string&, const string&) compile != nullptr was false. 
@xadupre xadupre added the ep:STMV label Jan 6, 2022
@binarybana

Thanks @xadupre, we'll take a look and get back to you as soon as we can.

@KJlaccHoeUM9l
Contributor

Hello @xadupre!

Interaction between the C++ and Python parts of the TVM EP goes through the PackedFunc and Registry classes. To use a Python function from C++ code, it must be registered in the global function table (see onnxruntime/python/providers/stvm/ort.py).
Registration is carried out through the JIT interface, so the registration functions must actually run. To trigger them, add the following import at the beginning of your script:

import onnxruntime.providers.stvm  # necessary to register tvm_onnx_import_and_compile and others

This should fix your problem.
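To see why a missing import shows up as a null pointer on the C++ side, here is a minimal pure-Python sketch of the registration pattern (hypothetical names, not TVM's actual API): the import matters only for its side effect of populating a global function table, and if nothing populated it, the later lookup returns null, which is exactly the `compile != nullptr` failure above.

```python
# Hypothetical sketch of a global function registry in the style of
# TVM's PackedFunc/Registry mechanism. Names are illustrative only.
_GLOBAL_FUNCS = {}

def register_func(name):
    """Decorator that stores a function in the global table."""
    def deco(fn):
        _GLOBAL_FUNCS[name] = fn
        return fn
    return deco

def get_global_func(name):
    # Mirrors the C++ lookup in stvm_api.cc: if registration never ran,
    # this returns None, and TVMCompile's `compile != nullptr` check fires.
    return _GLOBAL_FUNCS.get(name)

# Importing the module that contains code like this is what performs
# the registration; without that import, the table stays empty.
@register_func("tvm_onnx_import_and_compile")
def compile_model(model_bytes):
    return "compiled %d bytes" % len(model_bytes)
```

Importing `onnxruntime.providers.stvm` plays the role of importing the module containing the decorated functions: until it runs, the C++ side finds nothing under `tvm_onnx_import_and_compile`.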

@KJlaccHoeUM9l
Contributor

We understand that this is not very user-friendly for the end user, so we opened a PR to improve it.

@xadupre
Member Author

xadupre commented Jan 11, 2022

Thanks, it worked. I'll look at the PR.

@xadupre xadupre closed this as completed Jan 11, 2022