Use correct device for prior generation #2678

Open
jakubhejhal wants to merge 1 commit into main
Conversation

jakubhejhal

Motivation

I was trying to export an RTMDet model to ONNX using the torch2onnx function:

def torch2onnx(img: Any,

which has a device parameter. But even though I set it to cpu, as in the call sketched below, I was still getting cuda errors; the full traceback follows the sketch.
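Roughly how I called it (the image, config, and checkpoint paths here are placeholders for illustration, not the exact ones I used):

```python
from mmdeploy.apis import torch2onnx

torch2onnx(
    img='demo.jpg',                                 # placeholder input image
    work_dir='work_dir',                            # placeholder output directory
    save_file='end2end.onnx',
    deploy_cfg='rtmdet-ins_onnxruntime_static.py',  # placeholder deploy config
    model_cfg='rtmdet-ins_s_8xb32-300e_coco.py',    # placeholder model config
    model_checkpoint='rtmdet-ins_s.pth',            # placeholder checkpoint
    device='cpu')                                   # CPU explicitly requested
```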

Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/pytorch2onnx.py", line 99, in torch2onnx
    export(
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/onnx/export.py", line 138, in export
    torch.onnx.export(
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 506, in export
    _export(
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/onnx/optimizer.py", line 27, in model_to_graph__custom_optimizer
    graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 989, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/onnx/utils.py", line 893, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 1268, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/apis/onnx/export.py", line 123, in wrapper
    return forward(*arg, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/codebase/mmdet/models/detectors/single_stage.py", line 85, in single_stage_detector__forward
    return __forward_impl(self, batch_inputs, data_samples=data_samples)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/core/optimizers/function_marker.py", line 266, in g
    rets = f(*args, **kwargs)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/codebase/mmdet/models/detectors/single_stage.py", line 23, in __forward_impl
    output = self.bbox_head.predict(x, data_samples, rescale=False)
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdet/models/dense_heads/base_dense_head.py", line 197, in predict
    predictions = self.predict_by_feat(
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/codebase/mmdet/models/dense_heads/rtmdet_ins_head.py", line 98, in rtmdet_ins_head__predict_by_feat
    return _nms_with_mask_static(self, priors, bboxes, scores,
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/codebase/mmdet/models/dense_heads/rtmdet_ins_head.py", line 151, in _nms_with_mask_static
    mask_logits = _mask_predict_by_feat_single(self, mask_feats, kernels,
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdeploy/codebase/mmdet/models/dense_heads/rtmdet_ins_head.py", line 165, in _mask_predict_by_feat_single
    coord = self.prior_generator.single_level_grid_priors(
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/mmdet/models/task_modules/prior_generators/point_generator.py", line 206, in single_level_grid_priors
    shift_x = (torch.arange(0, feat_w, device=device) +
  File "/home/kubik/work/.venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
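The prior generator defaults its device argument to cuda when none is passed, which is why the export crashes on a CPU-only machine even though device='cpu' was requested. Passing the device of the feature tensor being processed (for example mask_feats.device) into single_level_grid_priors avoids the CUDA initialization, which is the direction this change takes. The standalone sketch below illustrates the failure mode and the fix; variable names are illustrative and it only assumes mmdet is installed:

```python
import torch
from mmdet.models.task_modules.prior_generators import MlvlPointGenerator

prior_generator = MlvlPointGenerator(strides=[8])
mask_feat = torch.zeros(1, 8, 20, 20)  # stand-in for CPU mask features

# Fails on a machine with no NVIDIA driver: `device` defaults to 'cuda'
# inside single_level_grid_priors, which triggers torch.cuda lazy init.
# coord = prior_generator.single_level_grid_priors(
#     mask_feat.shape[-2:], level_idx=0)

# Works on any device: take the device from the tensor we already have.
coord = prior_generator.single_level_grid_priors(
    mask_feat.shape[-2:], level_idx=0, device=mask_feat.device)
print(coord.shape, coord.device)
```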


CLAassistant commented Feb 24, 2024

CLA assistant check
All committers have signed the CLA.

@Priyanshu88

Hi @jakubhejhal, I am facing the problem below with rtmdet-ins-head. Any suggestions on how to resolve it?

    nms_with_mask_static_fn = mmdeploy_codebase_mmdet_models_dense_heads_rtmdet_ins_head_nms_with_mask_static_fn(bbox_head, cat_17, stack_3, sigmoid, cat_16, bbox_head_mask_head_projection, 100, 0.6, 0.05, 1000, 100, 0.5);  bbox_head = cat_17 = stack_3 = sigmoid = cat_16 = bbox_head_mask_head_projection = None
  File "/opt/conda/lib/python3.9/site-packages/mmdeploy/codebase/mmdet/models/dense_heads/rtmdet_ins_head.py", line 42, in nms_with_mask_static_fn
    mask_logits = _mask_predict_by_feat_single(self, mask_feats, kernels,
  File "/opt/conda/lib/python3.9/site-packages/mmdeploy/codebase/mmdet/models/dense_heads/rtmdet_ins_head.py", line 215, in _mask_predict_by_feat_single
    return mask_predict_by_feat_single_fn(self, mask_feat, kernels,priors)
  File "/opt/conda/lib/python3.9/site-packages/mmdeploy/codebase/mmdet/models/dense_heads/rtmdet_ins_head.py", line 57, in mask_predict_by_feat_single_fn
    coord = self.prior_generator.single_level_grid_priors(
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1265, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".forma
