[Coverage] Runtime error related to linear, reshape or slice ops #3183

Open · Tracked by #3179

chohk88 (Collaborator) opened this issue Sep 26, 2024 · 0 comments

chohk88 commented Sep 26, 2024

Traceback (most recent call last):
  File "/jsdasdet/assets/recipe/mixology-clara-monai-swinunetr__benchmarks-stable-pyt_perf-infer_--h100-pcie-80gb-_-1_gpus-1_bs--1_dld-synthetic_seed-42_-source/benchmark.py", line 372, in <module>
    CLI(benchmark)
    │   └ <function benchmark at 0x7fcca8c1fac0>
    └ <function CLI at 0x7fcca8b8ad40>
  File "/usr/local/lib/python3.10/dist-packages/jsonargparse/_cli.py", line 96, in CLI
    return _run_component(components, cfg_init)
           │              │           └ Namespace(cfg=Config(model_callable='monai.networks.nets.SwinUNETR', model_device=None, model_args=[], model_kwargs={'spatial...
           │              └ <function benchmark at 0x7fcca8c1fac0>
           └ <function _run_component at 0x7fcca8c1f0a0>
  File "/usr/local/lib/python3.10/dist-packages/jsonargparse/_cli.py", line 196, in _run_component
    return component(**cfg)
           │           └ Namespace(cfg=Config(model_callable='monai.networks.nets.SwinUNETR', model_device=None, model_args=[], model_kwargs={'spatial...
           └ <function benchmark at 0x7fcca8c1fac0>
  File "/jet/assets/recipe/mixology-clara-monai-swinunetr__benchmarks-stable-pyt_perf-infer_--h100-pcie-80gb-_-1_gpus-1_bs--1_dld-synthetic_seed-42_-source/benchmark.py", line 299, in benchmark
    nav.optimize(
    │   └ <function optimize at 0x7fcca8dc53f0>
    └ <module 'model_navigator' from '/usr/local/lib/python3.10/dist-packages/model_navigator/__init__.py'>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/inplace/__init__.py", line 118, in optimize
    module_registry.optimize()
    │               └ <function ModuleRegistry.optimize at 0x7fcca9e40f70>
    └ <model_navigator.inplace.registry.ModuleRegistry object at 0x7fcca9e47580>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/inplace/registry.py", line 73, in optimize
    module.optimize()
    │      └ <function Module.optimize at 0x7fcca8da9fc0>
    └ <Module at 0x7fcb89372180 for SwinUNETR at 0x7fcca8c282b0>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/inplace/wrapper.py", line 204, in optimize
    self._wrapper.optimize()
    └ <Module at 0x7fcb89372180 for SwinUNETR at 0x7fcca8c282b0>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/inplace/model.py", line 191, in optimize
    optimize(model=self._module, dataloader=TorchDataloader(samples), **updated_config_dict)
    │              │    │                   │               │           └ {'sample_count': 1, 'batching': True, 'input_names': None, 'output_names': None, 'target_formats': [<Format.TORCHSCRIPT: 'tor...
    │              │    │                   │               └ [PosixPath('/tmp/monai.networks.nets.swin_unetr.SwinUNETR_v5mr10lu/0.pt')]
    │              │    │                   └ <class 'model_navigator.inplace.utils.TorchDataloader'>
    │              │    └ SwinUNETR(
    │              │        (swinViT): SwinTransformer(
    │              │          (patch_embed): PatchEmbed(
    │              │            (proj): Conv3d(1, 12, kernel_size=(2, 2, 2), st...
    │              └ <model_navigator.inplace.model.RecordingModule object at 0x7fcb89234820>
    └ <function optimize at 0x7fcf5308b1c0>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/torch/__init__.py", line 152, in optimize
    package = optimize_pipeline(
              └ <function optimize_pipeline at 0x7fcca9e29630>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/pipelines/wrappers/optimize.py", line 73, in optimize_pipeline
    context = pipeline_manager.run(
              │                └ <function PipelineManager.run at 0x7fcca9e2a4d0>
              └ <model_navigator.pipelines.pipeline_manager.PipelineManager object at 0x7fcb89237010>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/pipelines/pipeline_manager.py", line 88, in run
    pipeline.run(workspace=workspace, config=config, context=context)
    │        │             │                 │               └ <model_navigator.pipelines.pipeline_context.PipelineContext object at 0x7fcb89237370>
    │        │             │                 └ CommonConfig(framework=<Framework.TORCH: 'torch'>, model=SwinUNETR(
    │        │             │                     (swinViT): SwinTransformer(
    │        │             │                       (patch_embed): PatchEmb...
    │        │             └ <model_navigator.core.workspace.Workspace object at 0x7fcb89235300>
    │        └ <function Pipeline.run at 0x7fccaa1f40d0>
    └ Correctness
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/pipelines/pipeline.py", line 68, in run
    command_output = self._execute_unit(
                     │    └ <function Pipeline._execute_unit at 0x7fccaa1f4430>
                     └ Correctness
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/pipelines/pipeline.py", line 121, in _execute_unit
    command_output = execution_unit.command().run(
                     │              └ Correctness
                     └ Cmd:Correctness, Config:torch, Runner:TorchTensorRTCompile
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/base.py", line 127, in run
    output = self._run(*args, **_filter_dict_for_func(kwargs, self._run))
             │    │     │       │                     │       │    └ <function Correctness._run at 0x7fccaa2abac0>
             │    │     │       │                     │       └ <model_navigator.commands.correctness.correctness.Correctness object at 0x7fca7a19c6a0>
             │    │     │       │                     └ {'framework': <Framework.TORCH: 'torch'>, 'model': SwinUNETR(
             │    │     │       │                         (swinViT): SwinTransformer(
             │    │     │       │                           (patch_embed): PatchEmbed(
             │    │     │       │                         ...
             │    │     │       └ <function Command.run.<locals>._filter_dict_for_func at 0x7fca62c939a0>
             │    │     └ ()
             │    └ <function Correctness._run at 0x7fccaa2abac0>
             └ <model_navigator.commands.correctness.correctness.Correctness object at 0x7fca7a19c6a0>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/correctness/correctness.py", line 150, in _run
    context.execute_python_script(
    │       └ <function ExecutionContext.execute_python_script at 0x7fccaa2ab400>
    └ <model_navigator.commands.execution_context.ExecutionContext object at 0x7fca7ac3c850>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/execution_context.py", line 142, in execute_python_script
    self._execute_function(func, unwrapped_args, allow_failure, cmd)
    │    │                 │     │               │              └ ['/bin/bash', 'torch/reproduce_correctness-torchtensorrtcompilerunner.sh']
    │    │                 │     │               └ False
    │    │                 │     └ ['--navigator_workspace', '/root/.cache/model_navigator/monai.networks.nets.swin_unetr.SwinUNETR/0', '--batch_dim', '0', '--r...
    │    │                 └ <function correctness at 0x7fcb52fd2ef0>
    │    └ <function ExecutionContext._execute_function at 0x7fccaa2ab490>
    └ <model_navigator.commands.execution_context.ExecutionContext object at 0x7fca7ac3c850>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/execution_context.py", line 156, in _execute_function
    fire.Fire(func, unwrapped_args)
    │    │    │     └ ['--navigator_workspace', '/root/.cache/model_navigator/monai.networks.nets.swin_unetr.SwinUNETR/0', '--batch_dim', '0', '--r...
    │    │    └ <function correctness at 0x7fcb52fd2ef0>
    │    └ <function Fire at 0x7fccaa224040>
    └ <module 'fire' from '/usr/local/lib/python3.10/dist-packages/fire/__init__.py'>
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      │     │          │     │                 │        └ 'benchmark.py'
                      │     │          │     │                 └ {}
                      │     │          │     └ Namespace(verbose=False, interactive=False, separator='-', completion=None, help=False, trace=False)
                      │     │          └ ['--navigator_workspace', '/root/.cache/model_navigator/monai.networks.nets.swin_unetr.SwinUNETR/0', '--batch_dim', '0', '--r...
                      │     └ <function correctness at 0x7fcb52fd2ef0>
                      └ <function _Fire at 0x7fccaa280280>
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
    │                           └ <function _CallAndUpdateTrace at 0x7fccaa2803a0>
    └ <function correctness at 0x7fcb52fd2ef0>
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
                │   │          └ {}
                │   └ [0, '/tmp/tmp2ly73qhm', 'TorchTensorRTCompile', {'metadata': [{'name': 'input__0', 'shape': (-1, 1, 64, 32, 192), 'dtype': 'f...
                └ <function correctness at 0x7fcb52fd2ef0>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/correctness/correctness_script.py", line 93, in correctness
    comp_output = runner.infer(sample)
                  │      │     └ {'input__0': array([[[[[0., 0., 0., ..., 0., 0., 0.],
                  │      │                 [0., 0., 0., ..., 0., 0., 0.],
                  │      │                 [0., 0., 0., ..., 0....
                  │      └ <function NavigatorRunner.infer at 0x7fcf53059480>
                  └ <model_navigator_custom_runners.torch_trt_compile.runner.TorchTensorRTCompileRunner object at 0x7fca68a4c160>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/runners/base.py", line 325, in infer
    output = self.infer_impl(feed_dict, *args, **kwargs)
             │    │          │           │       └ {}
             │    │          │           └ ()
             │    │          └ {'input__0': array([[[[[0., 0., 0., ..., 0., 0., 0.],
             │    │                      [0., 0., 0., ..., 0., 0., 0.],
             │    │                      [0., 0., 0., ..., 0....
             │    └ <function _BaseTorchRunner.infer_impl at 0x7fcf5307fbe0>
             └ <model_navigator_custom_runners.torch_trt_compile.runner.TorchTensorRTCompileRunner object at 0x7fca68a4c160>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/runners/torch.py", line 94, in infer_impl
    outputs = self._infer(feed_dict=feed_dict)
              │    │                └ {'input__0': array([[[[[0., 0., 0., ..., 0., 0., 0.],
              │    │                            [0., 0., 0., ..., 0., 0., 0.],
              │    │                            [0., 0., 0., ..., 0....
              │    └ <bound method _BaseTorchRunner._infer_v1 of <model_navigator_custom_runners.torch_trt_compile.runner.TorchTensorRTCompileRunn...
              └ <model_navigator_custom_runners.torch_trt_compile.runner.TorchTensorRTCompileRunner object at 0x7fca68a4c160>
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/runners/torch.py", line 135, in _infer_v1
    outputs = self._loaded_model(*args, **kwargs)
              │    │              │       └ {}
              │    │              └ (tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
              │    │                           [0., 0., 0.,  ..., 0., 0., 0.],
              │    │                           [0., 0., 0.,  ..., 0., 0., ...
              │    └ OptimizedModule(
              │        (_orig_mod): SwinUNETR(
              │          (swinViT): SwinTransformer(
              │            (patch_embed): PatchEmbed(
              │              (proj): C...
              └ <model_navigator_custom_runners.torch_trt_compile.runner.TorchTensorRTCompileRunner object at 0x7fca68a4c160>
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           │    │           │       └ {}
           │    │           └ (tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
           │    │                        [0., 0., 0.,  ..., 0., 0., 0.],
           │    │                        [0., 0., 0.,  ..., 0., 0., ...
           │    └ <function Module._call_impl at 0x7fcf707a0040>
           └ OptimizedModule(
               (_orig_mod): SwinUNETR(
                 (swinViT): SwinTransformer(
                   (patch_embed): PatchEmbed(
                     (proj): C...
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
           │             │       └ {}
           │             └ (tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
           │                          [0., 0., 0.,  ..., 0., 0., 0.],
           │                          [0., 0., 0.,  ..., 0., 0., ...
           └ <function Module._wrapped_call_impl at 0x7fca6088c430>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 434, in _fn
    return fn(*args, **kwargs)
           │   │       └ {}
           │   └ (tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
           │                [0., 0., 0.,  ..., 0., 0., 0.],
           │                [0., 0., 0.,  ..., 0., 0., ...
           └ <bound method Module._wrapped_call_impl of SwinUNETR(
               (swinViT): SwinTransformer(
                 (patch_embed): PatchEmbed(
                   (pro...
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           │    │           │       └ {}
           │    │           └ (tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
           │    │                        [0., 0., 0.,  ..., 0., 0., 0.],
           │    │                        [0., 0., 0.,  ..., 0., 0., ...
           │    └ <function Module._call_impl at 0x7fcf707a0040>
           └ SwinUNETR(
               (swinViT): SwinTransformer(
                 (patch_embed): PatchEmbed(
                   (proj): Conv3d(1, 12, kernel_size=(2, 2, 2), st...
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
           │             │       └ {}
           │             └ (tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
           │                          [0., 0., 0.,  ..., 0., 0., 0.],
           │                          [0., 0., 0.,  ..., 0., 0., ...
           └ <bound method SwinUNETR.forward of SwinUNETR(
               (swinViT): SwinTransformer(
                 (patch_embed): PatchEmbed(
                   (proj): Conv...
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 324, in forward
    self._check_input_size(x_in.shape[2:])
    │    │                 │    └ <attribute 'shape' of 'torch._C.TensorBase' objects>
    │    │                 └ tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
    │    │                              [0., 0., 0.,  ..., 0., 0., 0.],
    │    │                              [0., 0., 0.,  ..., 0., 0., 0...
    │    └ <function SwinUNETR._check_input_size at 0x7fcb8a2305e0>
    └ SwinUNETR(
        (swinViT): SwinTransformer(
          (patch_embed): PatchEmbed(
            (proj): Conv3d(1, 12, kernel_size=(2, 2, 2), st...
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 325, in torch_dynamo_resume_in_forward_at_324
    hidden_states_out = self.swinViT(x_in, self.normalize)
                        │            │     │    └ True
                        │            │     └ SwinUNETR(
                        │            │         (swinViT): SwinTransformer(
                        │            │           (patch_embed): PatchEmbed(
                        │            │             (proj): Conv3d(1, 12, kernel_size=(2, 2, 2), st...
                        │            └ tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
                        │                         [0., 0., 0.,  ..., 0., 0., 0.],
                        │                         [0., 0., 0.,  ..., 0., 0., 0...
                        └ SwinUNETR(
                            (swinViT): SwinTransformer(
                              (patch_embed): PatchEmbed(
                                (proj): Conv3d(1, 12, kernel_size=(2, 2, 2), st...
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           │    │           │       └ {}
           │    │           └ (tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
           │    │                        [0., 0., 0.,  ..., 0., 0., 0.],
           │    │                        [0., 0., 0.,  ..., 0., 0., ...
           │    └ <function Module._call_impl at 0x7fcf707a0040>
           └ SwinTransformer(
               (patch_embed): PatchEmbed(
                 (proj): Conv3d(1, 12, kernel_size=(2, 2, 2), stride=(2, 2, 2))
               )
               (pos_d...
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
           │             │       └ {}
           │             └ (tensor([[[[[0., 0., 0.,  ..., 0., 0., 0.],
           │                          [0., 0., 0.,  ..., 0., 0., 0.],
           │                          [0., 0., 0.,  ..., 0., 0., ...
           └ <bound method SwinTransformer.forward of SwinTransformer(
               (patch_embed): PatchEmbed(
                 (proj): Conv3d(1, 12, kernel_size=...
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 1067, in forward
    x1 = self.layers1[0](x0.contiguous())
         └ SwinTransformer(
             (patch_embed): PatchEmbed(
               (proj): Conv3d(1, 12, kernel_size=(2, 2, 2), stride=(2, 2, 2))
             )
             (pos_d...
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           │    │           │       └ {}
           │    │           └ (tensor([[[[[-0.1676, -0.1676, -0.1676,  ..., -0.1676, -0.1676, -0.1676],
           │    │                        [-0.1676, -0.1676, -0.1676,  ..., -0.167...
           │    └ <function Module._call_impl at 0x7fcf707a0040>
           └ BasicLayer(
               (blocks): ModuleList(
                 (0-1): 2 x SwinTransformerBlock(
                   (norm1): LayerNorm((12,), eps=1e-05, elementwi...
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
           │             │       └ {}
           │             └ (tensor([[[[[-0.1676, -0.1676, -0.1676,  ..., -0.1676, -0.1676, -0.1676],
           │                          [-0.1676, -0.1676, -0.1676,  ..., -0.167...
           └ <bound method BasicLayer.forward of BasicLayer(
               (blocks): ModuleList(
                 (0-1): 2 x SwinTransformerBlock(
                   (norm1): L...
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 898, in forward
    dp = int(np.ceil(d / window_size[0])) * window_size[0]
             │  │    │   │                  └ (7, 7, 7)
             │  │    │   └ (7, 7, 7)
             │  │    └ 32
             │  └ <ufunc 'ceil'>
             └ <module 'numpy' from '/usr/local/lib/python3.10/dist-packages/numpy/__init__.py'>
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 899, in torch_dynamo_resume_in_forward_at_898
    hp = int(np.ceil(h / window_size[1])) * window_size[1]
             │  │    │   │                  └ (7, 7, 7)
             │  │    │   └ (7, 7, 7)
             │  │    └ 16
             │  └ <ufunc 'ceil'>
             └ <module 'numpy' from '/usr/local/lib/python3.10/dist-packages/numpy/__init__.py'>
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 900, in torch_dynamo_resume_in_forward_at_899
    wp = int(np.ceil(w / window_size[2])) * window_size[2]
             │  │    │   │                  └ (7, 7, 7)
             │  │    │   └ (7, 7, 7)
             │  │    └ 96
             │  └ <ufunc 'ceil'>
             └ <module 'numpy' from '/usr/local/lib/python3.10/dist-packages/numpy/__init__.py'>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1121, in __call__
    return self._torchdynamo_orig_callable(
           │    └ <torch._dynamo.convert_frame.ConvertFrame object at 0x7fca68a4c1c0>
           └ <torch._dynamo.convert_frame.CatchErrorsWrapper object at 0x7fca68a4cb50>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 948, in __call__
    result = self._inner_convert(
             │    └ <torch._dynamo.convert_frame.ConvertFrameAssert object at 0x7fca68a4ce50>
             └ <torch._dynamo.convert_frame.ConvertFrame object at 0x7fca68a4c1c0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 472, in __call__
    return _compile(
           └ <function _compile at 0x7fcf4395a7a0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 85, in wrapper_function
    return StrobelightCompileTimeProfiler.profile_compile_time(
           │                              └ <classmethod(<function StrobelightCompileTimeProfiler.profile_compile_time at 0x7fd044a2cf70>)>
           └ <class 'torch._strobelight.compile_time_profiler.StrobelightCompileTimeProfiler'>
  File "/usr/local/lib/python3.10/dist-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
    return func(*args, **kwargs)
           │     │       └ {'frame_state': {'_id': 8, "L['___stack0']": FrameStateSizeEntry(scalar=14, size=None), "L['window_size'][2]": FrameStateSize...
           │     └ (<code object torch_dynamo_resume_in_forward_at_900 at 0x7fca61cb0a80, file "/usr/local/lib/python3.10/dist-packages/monai/ne...
           └ <function _compile at 0x7fcf4395a560>
  File "/usr/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
           │     │       └ {'frame_state': {'_id': 8, "L['___stack0']": FrameStateSizeEntry(scalar=14, size=None), "L['window_size'][2]": FrameStateSize...
           │     └ (<code object torch_dynamo_resume_in_forward_at_900 at 0x7fca61cb0a80, file "/usr/local/lib/python3.10/dist-packages/monai/ne...
           └ <function _compile at 0x7fcf4395a4d0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 817, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
                   │             │     │          │      └ <function _compile.<locals>.transform at 0x7fca62c92050>
                   │             │     │          └ Hooks(guard_export_fn=None, guard_fail_fn=None)
                   │             │     └ False
                   │             └ <code object torch_dynamo_resume_in_forward_at_900 at 0x7fca61cb0a80, file "/usr/local/lib/python3.10/dist-packages/monai/net...
                   └ <function _compile.<locals>.compile_inner at 0x7fca624643a0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 233, in time_wrapper
    r = func(*args, **kwargs)
        │     │       └ {}
        │     └ (<code object torch_dynamo_resume_in_forward_at_900 at 0x7fca61cb0a80, file "/usr/local/lib/python3.10/dist-packages/monai/ne...
        └ <function _compile.<locals>.compile_inner at 0x7fca624644c0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 636, in compile_inner
    out_code = transform_code_object(code, transform)
               │                     │     └ <function _compile.<locals>.transform at 0x7fca62c92050>
               │                     └ <code object torch_dynamo_resume_in_forward_at_900 at 0x7fca61cb0a80, file "/usr/local/lib/python3.10/dist-packages/monai/net...
               └ <function transform_code_object at 0x7fcf46b81a20>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1270, in transform_code_object
    transformations(instructions, code_options)
    │               │             └ {'co_argcount': 11, 'co_posonlyargcount': 0, 'co_kwonlyargcount': 0, 'co_nlocals': 16, 'co_stacksize': 6, 'co_flags': 1677728...
    │               └ [Instruction(opcode=124, opname='LOAD_FAST', arg=0, argval='___stack0', offset=0, starts_line=900, is_jump_target=False, posi...
    └ <function _compile.<locals>.transform at 0x7fca62c92050>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 178, in _fn
    return fn(*args, **kwargs)
           │   │       └ {}
           │   └ ([Instruction(opcode=124, opname='LOAD_FAST', arg=0, argval='___stack0', offset=0, starts_line=900, is_jump_target=False, pos...
           └ <function _compile.<locals>.transform at 0x7fca62c91e10>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 582, in transform
    tracer.run()
    │      └ <function InstructionTranslator.run at 0x7fcf4395f490>
    └ <torch._dynamo.symbolic_convert.InstructionTranslator object at 0x7fca69e6eaa0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2476, in run
    super().run()
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in run
    while self.step():
          │    └ <function InstructionTranslatorBase.step at 0x7fcf43950f70>
          └ <torch._dynamo.symbolic_convert.InstructionTranslator object at 0x7fca69e6eaa0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 816, in step
    self.dispatch_table[inst.opcode](self, inst)
    │    │              │    │       │     └ Instruction(opcode=83, opname='RETURN_VALUE', arg=None, argval=None, offset=258, starts_line=922, is_jump_target=False, posit...
    │    │              │    │       └ <torch._dynamo.symbolic_convert.InstructionTranslator object at 0x7fca69e6eaa0>
    │    │              │    └ 83
    │    │              └ Instruction(opcode=83, opname='RETURN_VALUE', arg=None, argval=None, offset=258, starts_line=922, is_jump_target=False, posit...
    │    └ [None, <function InstructionTranslatorBase.POP_TOP at 0x7fcf43958790>, <function InstructionTranslatorBase.ROT_TWO at 0x7fcf4...
    └ <torch._dynamo.symbolic_convert.InstructionTranslator object at 0x7fca69e6eaa0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2667, in RETURN_VALUE
    self._return(inst)
    │    │       └ Instruction(opcode=83, opname='RETURN_VALUE', arg=None, argval=None, offset=258, starts_line=922, is_jump_target=False, posit...
    │    └ <function InstructionTranslator._return at 0x7fcf4395f760>
    └ <torch._dynamo.symbolic_convert.InstructionTranslator object at 0x7fca69e6eaa0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2652, in _return
    self.output.compile_subgraph(
    │    │      └ <function OutputGraph.compile_subgraph at 0x7fcf439369e0>
    │    └ <torch._dynamo.output_graph.OutputGraph object at 0x7fca1c312f50>
    └ <torch._dynamo.symbolic_convert.InstructionTranslator object at 0x7fca69e6eaa0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1102, in compile_subgraph
    self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
    │    │                         │                 │               └ FakeRootModule(...)
    │    │                         │                 └ [TensorVariable()]
    │    │                         └ <torch._dynamo.symbolic_convert.InstructionTranslator object at 0x7fca69e6eaa0>
    │    └ <function OutputGraph.compile_and_call_fx_graph at 0x7fcf43936e60>
    └ <torch._dynamo.output_graph.OutputGraph object at 0x7fca1c312f50>
  File "/usr/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
           │     │       └ {}
           │     └ (<torch._dynamo.output_graph.OutputGraph object at 0x7fca1c312f50>, <torch._dynamo.symbolic_convert.InstructionTranslator obj...
           └ <function OutputGraph.compile_and_call_fx_graph at 0x7fcf43936dd0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1324, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
                  │    │                  └ GraphModule(
                  │    │                      (wrap_body_0): GraphModule()
                  │    │                      (L__self___blocks_0_drop_path): Identity()
                  │    │                      (wrap_body_1): GraphModule()
                  │    │                      (wr...
                  │    └ <function OutputGraph.call_user_compiler at 0x7fcf43937130>
                  └ <torch._dynamo.output_graph.OutputGraph object at 0x7fca1c312f50>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 233, in time_wrapper
    r = func(*args, **kwargs)
        │     │       └ {}
        │     └ (<torch._dynamo.output_graph.OutputGraph object at 0x7fca1c312f50>, GraphModule(
        │         (wrap_body_0): GraphModule()
        │         (L__self___...
        └ <function OutputGraph.call_user_compiler at 0x7fcf439370a0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1396, in call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
                  │           │   │    └ <function OutputGraph.example_inputs at 0x7fcf43937010>
                  │           │   └ <torch._dynamo.output_graph.OutputGraph object at 0x7fca1c312f50>
                  │           └ GraphModule(
                  │               (wrap_body_0): GraphModule()
                  │               (L__self___blocks_0_drop_path): Identity()
                  │               (wrap_body_1): GraphModule()
                  │               (wr...
                  └ <torch._dynamo.repro.after_dynamo.WrapBackendDebug object at 0x7fca68a4d7e0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 129, in __call__
    compiled_gm = compiler_fn(gm, example_inputs)
                  │           │   └ [tensor([[[[[-0.1676,  0.0293,  0.0904,  ..., -0.1300,  0.0347, -0.3494],
                  │           │                [-0.1676,  0.0293,  0.0904,  ..., -0.130...
                  │           └ GraphModule(
                  │               (wrap_body_0): GraphModule()
                  │               (L__self___blocks_0_drop_path): Identity()
                  │               (wrap_body_1): GraphModule()
                  │               (wr...
                  └ functools.partial(<torch._TorchCompileWrapper object at 0x7fca68a4d420>)
  File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 2223, in __call__
    return self.compiler_fn(model_, inputs_, **self.kwargs)
           │    │           │       │          │    └ {'options': {'truncate_long_and_double': True, 'enabled_precisions': {torch.float32, torch.float16}}}
           │    │           │       │          └ <torch._TorchCompileWrapper object at 0x7fca68a4d420>
           │    │           │       └ [tensor([[[[[-0.1676,  0.0293,  0.0904,  ..., -0.1300,  0.0347, -0.3494],
           │    │           │                    [-0.1676,  0.0293,  0.0904,  ..., -0.130...
           │    │           └ GraphModule(
           │    │               (wrap_body_0): GraphModule()
           │    │               (L__self___blocks_0_drop_path): Identity()
           │    │               (wrap_body_1): GraphModule()
           │    │               (wr...
           │    └ <function torch_tensorrt_backend at 0x7fccab4b3640>
           └ <torch._TorchCompileWrapper object at 0x7fca68a4d420>
  File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/backend/backends.py", line 44, in torch_tensorrt_backend
    return DEFAULT_BACKEND(gm, sample_inputs, **kwargs)
           │               │   │                └ {'options': {'truncate_long_and_double': True, 'enabled_precisions': {torch.float32, torch.float16}}}
           │               │   └ [tensor([[[[[-0.1676,  0.0293,  0.0904,  ..., -0.1300,  0.0347, -0.3494],
           │               │                [-0.1676,  0.0293,  0.0904,  ..., -0.130...
           │               └ GraphModule(
           │                   (wrap_body_0): GraphModule()
           │                   (L__self___blocks_0_drop_path): Identity()
           │                   (wrap_body_1): GraphModule()
           │                   (wr...
           └ <function aot_torch_tensorrt_aten_backend at 0x7fccab4b36d0>
  File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/backend/backends.py", line 52, in aot_torch_tensorrt_aten_backend
    return _pretraced_backend(gm, sample_inputs, settings)
           │                  │   │              └ CompilationSettings(enabled_precisions={<dtype.f16: 6>, <dtype.f32: 7>}, debug=False, workspace_size=0, min_block_size=5, tor...
           │                  │   └ [tensor([[[[[-0.1676,  0.0293,  0.0904,  ..., -0.1300,  0.0347, -0.3494],
           │                  │                [-0.1676,  0.0293,  0.0904,  ..., -0.130...
           │                  └ GraphModule(
           │                      (wrap_body_0): GraphModule()
           │                      (L__self___blocks_0_drop_path): Identity()
           │                      (wrap_body_1): GraphModule()
           │                      (wr...
           └ <function _pretraced_backend at 0x7fccab4b3760>
> File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/dynamo/backend/backends.py", line 90, in _pretraced_backend
    gm = aot_export_joint_simple(
         └ <function aot_export_joint_simple at 0x7fcd35174ca0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1267, in aot_export_joint_simple
    fx_g, metadata, in_spec, out_spec = _aot_export_function(
                                        └ <function _aot_export_function at 0x7fcd35174dc0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1382, in _aot_export_function
    fx_g, meta = create_aot_dispatcher_function(
                 └ <function create_aot_dispatcher_function at 0x7fcd35174a60>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/utils.py", line 233, in time_wrapper
    r = func(*args, **kwargs)
        │     │       └ {}
        │     └ (<function create_tree_flattened_fn.<locals>.flat_fn at 0x7fca6977ab00>, [tensor([[[[[-0.1676,  0.0293,  0.0904,  ..., -0.130...
        └ <function create_aot_dispatcher_function at 0x7fcd3527e440>
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 708, in create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
                               └ functools.partial(<function aot_dispatch_export at 0x7fcd351a8430>, needs_autograd=False)
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 102, in aot_dispatch_export
    graph, _, _ = aot_dispatch_base_graph(
                  └ <function aot_dispatch_base_graph at 0x7fcd351a8280>
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 138, in aot_dispatch_base_graph
    fw_module = _create_graph(
                └ <function _create_graph at 0x7fcd351a81f0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 46, in _create_graph
    fx_g = make_fx(
           └ <function make_fx at 0x7fcf69a59750>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1428, in wrapped
    return make_fx_tracer.trace(f, *args)
           │              │     │   └ (FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),)
           │              │     └ <function create_tree_flattened_fn.<locals>.flat_fn at 0x7fca6a314a60>
           │              └ <function _MakefxTracer.trace at 0x7fcf69a595a0>
           └ <torch.fx.experimental.proxy_tensor._MakefxTracer object at 0x7fca68d93010>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1374, in trace
    return self._trace_inner(f, *args)
           │    │            │   └ (FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),)
           │    │            └ <function create_tree_flattened_fn.<locals>.flat_fn at 0x7fca6a314a60>
           │    └ <function _MakefxTracer._trace_inner at 0x7fcf69a59510>
           └ <torch.fx.experimental.proxy_tensor._MakefxTracer object at 0x7fca68d93010>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 1361, in _trace_inner
    t = dispatch_trace(
        └ <function dispatch_trace at 0x7fcf69a2bc70>
  File "/usr/local/lib/python3.10/dist-packages/torch/_compile.py", line 31, in inner
    return disable_fn(*args, **kwargs)
           │           │       └ {'tracer': <torch.fx.experimental.proxy_tensor.PythonKeyTracer object at 0x7fca7b0ff3a0>, 'concrete_args': (PH,)}
           │           └ (<function <lambda> at 0x7fca6a314310>,)
           └ <function dispatch_trace at 0x7fcb501535b0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 601, in _fn
    return fn(*args, **kwargs)
           │   │       └ {'tracer': <torch.fx.experimental.proxy_tensor.PythonKeyTracer object at 0x7fca7b0ff3a0>, 'concrete_args': (PH,)}
           │   └ (<function <lambda> at 0x7fca6a314310>,)
           └ <function dispatch_trace at 0x7fcf69a2bbe0>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 642, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
            │      │     │     └ (PH,)
            │      │     └ <function <lambda> at 0x7fca6a314310>
            │      └ <function Tracer.trace at 0x7fcd1da71f30>
            └ <torch.fx.experimental.proxy_tensor.PythonKeyTracer object at 0x7fca7b0ff3a0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 601, in _fn
    return fn(*args, **kwargs)
           │   │       └ {}
           │   └ (<torch.fx.experimental.proxy_tensor.PythonKeyTracer object at 0x7fca7b0ff3a0>, <function <lambda> at 0x7fca6a314310>, (PH,))
           └ <function Tracer.trace at 0x7fcf69cf8280>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/_symbolic_trace.py", line 822, in trace
    (self.create_arg(fn(*args)),),
     │    │          │   └ [Proxy(arg0_1)]
     │    │          └ <function <lambda> at 0x7fca6a314310>
     │    └ <function PythonKeyTracer.create_arg at 0x7fcf69a2ba30>
     └ <torch.fx.experimental.proxy_tensor.PythonKeyTracer object at 0x7fca7b0ff3a0>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 660, in wrapped
    out = f(*tensors)
          │  └ (FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),)
          └ <function <lambda> at 0x7fca6a315c60>
  File "<string>", line 1, in <lambda>
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 389, in _functionalized_f_helper
    f_outs = fn(*f_args)
             │   └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
             │         ...
             └ <function create_tree_flattened_fn.<locals>.flat_fn at 0x7fca6a315ea0>
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 73, in inner_fn
    outs = fn(*args)
           │   └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
           │         ...
           └ <function create_tree_flattened_fn.<locals>.flat_fn at 0x7fca6977ab00>
  File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/utils.py", line 178, in flat_fn
    tree_out = fn(*args, **kwargs)
               │   │       └ {}
               │   └ [FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
               │         ...
               └ GraphModule(
                   (wrap_body_0): GraphModule()
                   (L__self___blocks_0_drop_path): Identity()
                   (wrap_body_1): GraphModule()
                   (wr...
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 738, in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
           │    │             │      │       └ {}
           │    │             │      └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
           │    │             │            ...
           │    │             └ GraphModule(
           │    │                 (wrap_body_0): GraphModule()
           │    │                 (L__self___blocks_0_drop_path): Identity()
           │    │                 (wrap_body_1): GraphModule()
           │    │                 (wr...
           │    └ <torch.fx.graph_module._WrappedCall object at 0x7fca7b0fd090>
           └ GraphModule(
               (wrap_body_0): GraphModule()
               (L__self___blocks_0_drop_path): Identity()
               (wrap_body_1): GraphModule()
               (wr...
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 316, in __call__
    raise e
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 303, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
                 │    │    │              │       └ {}
                 │    │    │              └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
                 │    │    │                    ...
                 │    │    └ GraphModule(
                 │    │        (wrap_body_0): GraphModule()
                 │    │        (L__self___blocks_0_drop_path): Identity()
                 │    │        (wrap_body_1): GraphModule()
                 │    │        (wr...
                 │    └ <class 'torch.fx.graph_module.GraphModule.__new__.<locals>.GraphModuleImpl'>
                 └ <torch.fx.graph_module._WrappedCall object at 0x7fca7b0fd090>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/_symbolic_trace.py", line 800, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
           │    │           │    │        │     └ {}
           │    │           │    │        └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
           │    │           │    │              ...
           │    │           │    └ <function Tracer.trace.<locals>.module_call_wrapper.<locals>.forward at 0x7fca68ba3010>
           │    │           └ GraphModule(
           │    │               (wrap_body_0): GraphModule()
           │    │               (L__self___blocks_0_drop_path): Identity()
           │    │               (wrap_body_1): GraphModule()
           │    │               (wr...
           │    └ <function PythonKeyTracer.call_module at 0x7fcf69a2b910>
           └ <torch.fx.experimental.proxy_tensor.PythonKeyTracer object at 0x7fca7b0ff3a0>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 569, in call_module
    return forward(*args, **kwargs)
           │        │       └ {}
           │        └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
           │              ...
           └ <function Tracer.trace.<locals>.module_call_wrapper.<locals>.forward at 0x7fca68ba3010>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/_symbolic_trace.py", line 793, in forward
    return _orig_module_call(mod, *args, **kwargs)
           │                 │     │       └ {}
           │                 │     └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
           │                 │           ...
           │                 └ GraphModule(
           │                     (wrap_body_0): GraphModule()
           │                     (L__self___blocks_0_drop_path): Identity()
           │                     (wrap_body_1): GraphModule()
           │                     (wr...
           └ <function Module._wrapped_call_impl at 0x7fcf70797f40>
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           │    │           │       └ {}
           │    │           └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
           │    │                 ...
           │    └ <function Module._call_impl at 0x7fcf707a0040>
           └ GraphModule(
               (wrap_body_0): GraphModule()
               (L__self___blocks_0_drop_path): Identity()
               (wrap_body_1): GraphModule()
               (wr...
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
           │             │       └ {}
           │             └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
           │                   ...
           └ <bound method forward of GraphModule(
               (wrap_body_0): GraphModule()
               (L__self___blocks_0_drop_path): Identity()
               (wrap_bod...
  File "<eval_with_key>.154", line 56, in forward
    tag_activation_checkpoint = torch._higher_order_ops.wrap.tag_activation_checkpoint(wrap_body_0, clone_default, l__self___blocks_0_norm1_weight, l__self___blocks_0_norm1_bias, l__self___blocks_0_attn_qkv_weight, l__self___blocks_0_attn_qkv_bias, l__self___blocks_0_attn_relative_position_index, l__self___blocks_0_attn_relative_position_bias_table, l__self___blocks_0_attn_proj_weight, l__self___blocks_0_attn_proj_bias, use_reentrant = False);  wrap_body_0 = l__self___blocks_0_norm1_weight = l__self___blocks_0_norm1_bias = l__self___blocks_0_attn_qkv_weight = l__self___blocks_0_attn_qkv_bias = l__self___blocks_0_attn_relative_position_index = l__self___blocks_0_attn_relative_position_bias_table = l__self___blocks_0_attn_proj_weight = l__self___blocks_0_attn_proj_bias = None
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                │                                                     │                                    │                                                           └ GraphModule()
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                │                                                     │                                    └ Parameter containing:
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                │                                                     │                                      tensor([ 0.1644, -0.2638,  0.2602, -0.1713, -0.2792, -0.0949, -0.2403, -0.2642,
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                │                                                     │                                               0.2230, -0.108...
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                │                                                     └ Parameter containing:
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                │                                                       tensor([[ 0.0116, -0.2867,  0.2646,  0.0244,  0.2220,  0.1929, -0.0747, -0.2665,
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                │                                                                 0.1278,  0.0...
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                └ Parameter containing:
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                  tensor([[-0.0065, -0.0188,  0.0264],
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                          [ 0.0031,  0.0165,  0.0040],
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 │                                                          [-0.0228, -0.0269, -0...
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                 └ tensor([[1098, 1097, 1096,  ...,    2,    1,    0],
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                           [1099, 1098, 1097,  ...,    3,    2,    1],
                                │     │                 │    │                         │            │              │                                │                              │                                   │                                           [1100, 1099, ...
                                │     │                 │    │                         │            │              │                                │                              │                                   └ Parameter containing:
                                │     │                 │    │                         │            │              │                                │                              │                                     tensor([ 0.0290,  0.1313,  0.0530, -0.1765, -0.0533, -0.0251, -0.1463,  0.2021,
                                │     │                 │    │                         │            │              │                                │                              │                                              0.0727,  0.158...
                                │     │                 │    │                         │            │              │                                │                              └ Parameter containing:
                                 │     │                 │    │                         │            │              │                                │                                tensor([[-0.1609, -0.1888, -0.2067,  0.2749,  0.1639, -0.2214, -0.2880,  0.0123,
                                │     │                 │    │                         │            │              │                                │                                         -0.0276,  0.2...
                                │     │                 │    │                         │            │              │                                └ Parameter containing:
                                │     │                 │    │                         │            │              │                                  tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0',
                                │     │                 │    │                         │            │              │                                         requires_grad=True)
                                │     │                 │    │                         │            │              └ Parameter containing:
                                │     │                 │    │                         │            │                tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], device='cuda:0',
                                │     │                 │    │                         │            │                       requires_grad=True)
                                │     │                 │    │                         │            └ FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
                                │     │                 │    │                         │                   ...
                                │     │                 │    │                         └ GraphModule()
                                │     │                 │    └ <torch._higher_order_ops.wrap.TagActivationCheckpoint object at 0x7fcb40ff14b0>
                                │     │                 └ <module 'torch._higher_order_ops.wrap' from '/usr/local/lib/python3.10/dist-packages/torch/_higher_order_ops/wrap.py'>
                                │     └ <module 'torch._higher_order_ops' from '/usr/local/lib/python3.10/dist-packages/torch/_higher_order_ops/__init__.py'>
                                └ <module 'torch' from '/usr/local/lib/python3.10/dist-packages/torch/__init__.py'>
  File "/usr/local/lib/python3.10/dist-packages/torch/_higher_order_ops/wrap.py", line 206, in __call__
    return Interpreter(gmod).run(*args)
           │           │          └ (FunctionalTensor(_to_functional_tensor(FakeTensor(..., device='cuda:0', size=(1, 32, 16, 96, 12), dtype=torch.float16),
           │           │                ...
           │           └ GraphModule()
           └ <class 'torch.fx.interpreter.Interpreter'>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 146, in run
    self.env[node] = self.run_node(node)
    │    │   │       │    │        └ getitem_3
    │    │   │       │    └ <function Interpreter.run_node at 0x7fcf69cf9750>
    │    │   │       └ <torch.fx.interpreter.Interpreter object at 0x7fca7b64f0a0>
    │    │   └ getitem_3
    │    └ {l__self___blocks_0_attn_relative_position_bias_table: Parameter containing:
    │      tensor([[-0.0065, -0.0188,  0.0264],
    │              [ 0...
    └ <torch.fx.interpreter.Interpreter object at 0x7fca7b64f0a0>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 203, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
                   │     │ │   │ │       │     └ {}
                   │     │ │   │ │       └ (FakeTensor(..., device='cuda:0', size=(343, 343), dtype=torch.int64), (slice(None, 343, None), slice(None, 343, None)))
                   │     │ │   │ └ <built-in function getitem>
                   │     │ │   └ getitem_3
                   │     │ └ 'call_function'
                   │     └ getitem_3
                   └ <torch.fx.interpreter.Interpreter object at 0x7fca7b64f0a0>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 275, in call_function
    return target(*args, **kwargs)
           │       │       └ {}
           │       └ (FakeTensor(..., device='cuda:0', size=(343, 343), dtype=torch.int64), (slice(None, 343, None), slice(None, 343, None)))
           └ <built-in function getitem>
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/experimental/proxy_tensor.py", line 705, in __torch_function__
    return func(*args, **kwargs)
           │     │       └ {}
           │     └ (FakeTensor(..., device='cuda:0', size=(343, 343), dtype=torch.int64), (slice(None, 343, None), slice(None, 343, None)))
           └ <slot wrapper '__getitem__' of 'torch._C.TensorBase' objects>
RuntimeError: View operation returned a tensor that is the same as the input base tensor.  This is no longer allowed; you must explicitly create a new tensor (e.g., using .detach()). As a user, you could have made a mistake implementing __torch_dispatch__ or a Python operator decomposition or meta registration; if that's not the case, please report a bug to PyTorch or the backend you are using.
While executing %getitem_3 : [num_users=1] = call_function[target=operator.getitem](args = (%clone, (slice(None, 343, None), slice(None, 343, None))), kwargs = {})
Original traceback:
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 903, in torch_dynamo_resume_in_forward_at_900
    x = blk(x, attn_mask)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 696, in forward
    x = checkpoint.checkpoint(self.forward_part1, x, mask_matrix, use_reentrant=False)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 636, in forward_part1
    attn_windows = self.attn(x_windows, mask=attn_mask)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 522, in forward
    self.relative_position_index.clone()[:n, :n].reshape(-1)
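For context, the `%getitem_3` failure above corresponds to the full-extent slice of the cloned `relative_position_index` tensor at swin_unetr.py:522. Below is a minimal standalone sketch of that pattern; the (343, 343) int64 shape mirrors the FakeTensor in the log, everything else is an assumption and not part of the captured output:

```python
import torch

# Hypothetical repro of the indexing pattern the RuntimeError above points at.
relative_position_index = torch.randint(0, 343, (343, 343), dtype=torch.int64)
n = 343

# [:n, :n] spans the whole tensor, so the returned view aliases the clone unchanged.
# Eager mode accepts this; functionalization during the Torch-TensorRT compile pass
# rejects a view op whose output is the same as its input base tensor.
flat_index = relative_position_index.clone()[:n, :n].reshape(-1)
print(flat_index.shape)  # torch.Size([117649]) in eager mode
```

In eager mode this runs without error, which suggests the failure is specific to how the full-size slice on a clone is functionalized inside the traced graph.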
2024-08-31 04:23:44.949 | INFO     | MainProcess | /usr/local/lib/python3.10/dist-packages/model_navigator/pipelines/pipeline.py:128 - Attempting to use FunctionalTensor on its own. Instead, please use it with a corresponding FunctionalTensorMode()
While executing %linear : [num_users=1] = call_function[target=torch._C._nn.linear](args = (%view_1, %l__self___blocks_0_attn_qkv_weight, %l__self___blocks_0_attn_qkv_bias), kwargs = {})
Original traceback:
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 903, in torch_dynamo_resume_in_forward_at_900
    x = blk(x, attn_mask)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 696, in forward
    x = checkpoint.checkpoint(self.forward_part1, x, mask_matrix, use_reentrant=False)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 636, in forward_part1
    attn_windows = self.attn(x_windows, mask=attn_mask)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 517, in forward
    qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, c // self.num_heads).permute(2, 0, 3, 1, 4)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
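For reference, the `%linear` node in the message above comes from the qkv projection in `WindowAttention.forward` (swin_unetr.py:517). A minimal sketch of that call pattern follows; `b`, `n`, `c`, and `num_heads` are hypothetical sizes chosen for illustration, not values taken from the log:

```python
import torch
import torch.nn as nn

# Hypothetical sizes; c must be divisible by num_heads for the reshape below.
b, n, c, num_heads = 1, 343, 96, 3
x = torch.randn(b, n, c)
qkv_proj = nn.Linear(c, 3 * c)

# Plain linear -> reshape -> permute, as in WindowAttention.forward. In eager mode this
# succeeds; the "FunctionalTensor on its own" error only appears when the traced inputs
# are FunctionalTensors outside a FunctionalTensorMode during the correctness run.
qkv = qkv_proj(x).reshape(b, n, 3, num_heads, c // num_heads).permute(2, 0, 3, 1, 4)
print(qkv.shape)  # torch.Size([3, 1, 3, 343, 32])
```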
2024-08-31 04:23:44.950 | WARNING  | MainProcess | /usr/local/lib/python3.10/dist-packages/model_navigator/pipelines/pipeline.py:131 - Command finished with ModelNavigatorUserInputError. The error is considered as external error. Usually caused by incompatibilities between the model and the target formats and/or runtimes. Please review the command output.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/execution_context.py", line 156, in _execute_function
    fire.Fire(func, unwrapped_args)
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/dist-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/correctness/correctness_script.py", line 93, in correctness
    comp_output = runner.infer(sample)
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/runners/base.py", line 325, in infer
    output = self.infer_impl(feed_dict, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/runners/torch.py", line 94, in infer_impl
    outputs = self._infer(feed_dict=feed_dict)
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/runners/torch.py", line 135, in _infer_v1
    outputs = self._loaded_model(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line [1714](https://gitlab-master.nvidia.com/dl/jet/ci/-/jobs/109186699#L1714), in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 434, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 324, in forward
    self._check_input_size(x_in.shape[2:])
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 325, in torch_dynamo_resume_in_forward_at_324
    hidden_states_out = self.swinViT(x_in, self.normalize)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line [1725](https://gitlab-master.nvidia.com/dl/jet/ci/-/jobs/109186699#L1725), in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 1067, in forward
    x1 = self.layers1[0](x0.contiguous())
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 898, in forward
    dp = int(np.ceil(d / window_size[0])) * window_size[0]
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 899, in torch_dynamo_resume_in_forward_at_898
    hp = int(np.ceil(h / window_size[1])) * window_size[1]
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 900, in torch_dynamo_resume_in_forward_at_899
    wp = int(np.ceil(w / window_size[2])) * window_size[2]
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 900, in torch_dynamo_resume_in_forward_at_900
    wp = int(np.ceil(w / window_size[2])) * window_size[2]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 601, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 738, in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 316, in __call__
    raise e
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 303, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "<eval_with_key>.154", line 56, in forward
    tag_activation_checkpoint = torch._higher_order_ops.wrap.tag_activation_checkpoint(wrap_body_0, clone_default, l__self___blocks_0_norm1_weight, l__self___blocks_0_norm1_bias, l__self___blocks_0_attn_qkv_weight, l__self___blocks_0_attn_qkv_bias, l__self___blocks_0_attn_relative_position_index, l__self___blocks_0_attn_relative_position_bias_table, l__self___blocks_0_attn_proj_weight, l__self___blocks_0_attn_proj_bias, use_reentrant = False);  wrap_body_0 = l__self___blocks_0_norm1_weight = l__self___blocks_0_norm1_bias = l__self___blocks_0_attn_qkv_weight = l__self___blocks_0_attn_qkv_bias = l__self___blocks_0_attn_relative_position_index = l__self___blocks_0_attn_relative_position_bias_table = l__self___blocks_0_attn_proj_weight = l__self___blocks_0_attn_proj_bias = None
  File "/usr/local/lib/python3.10/dist-packages/torch/_higher_order_ops/wrap.py", line 206, in __call__
    return Interpreter(gmod).run(*args)
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 146, in run
    self.env[node] = self.run_node(node)
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 203, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/fx/interpreter.py", line 275, in call_function
    return target(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/_subclasses/functional_tensor.py", line 197, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: Attempting to use FunctionalTensor on its own. Instead, please use it with a corresponding FunctionalTensorMode()
While executing %linear : [num_users=1] = call_function[target=torch._C._nn.linear](args = (%view_1, %l__self___blocks_0_attn_qkv_weight, %l__self___blocks_0_attn_qkv_bias), kwargs = {})
Original traceback:
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 903, in torch_dynamo_resume_in_forward_at_900
    x = blk(x, attn_mask)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 696, in forward
    x = checkpoint.checkpoint(self.forward_part1, x, mask_matrix, use_reentrant=False)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 636, in forward_part1
    attn_windows = self.attn(x_windows, mask=attn_mask)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/monai/networks/nets/swin_unetr.py", line 517, in forward
    qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, c // self.num_heads).permute(2, 0, 3, 1, 4)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1725, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/pipelines/pipeline.py", line 121, in _execute_unit
    command_output = execution_unit.command().run(
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/base.py", line 127, in run
    output = self._run(*args, **_filter_dict_for_func(kwargs, self._run))
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/correctness/correctness.py", line 150, in _run
    context.execute_python_script(
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/execution_context.py", line 142, in execute_python_script
    self._execute_function(func, unwrapped_args, allow_failure, cmd)
  File "/usr/local/lib/python3.10/dist-packages/model_navigator/commands/execution_context.py", line 168, in _execute_function
    raise ModelNavigatorUserInputError(cmd_to_reproduce_error) from e
model_navigator.exceptions.ModelNavigatorUserInputError: Command to reproduce error: /bin/bash torch/reproduce_correctness-torchtensorrtcompilerunner.sh
monai.networks.nets.swin_unetr.SwinUNETR: Validating model torch on TorchTensorRTCompile backend FAIL
2024-08-31 04:23:44.952 | INFO     | MainProcess | /usr/local/lib/python3.10/dist-packages/model_navigator/pipelines/pipeline.py:148 - Execution time: 15.24[s]