🐛 [Bug] Segmentation fault when using int8 precision #2018

Closed
AhmetHamzaEmra opened this issue Jun 13, 2023 · 4 comments
Labels: bug (Something isn't working), component: quantization (Issues re: Quantization), No Activity

Comments

@AhmetHamzaEmra

Bug Description

Torch-TensorRT segfaults when compiling a NanoGPT-like model (the bark text-to-audio model) with int8 precision.

To Reproduce

Steps to reproduce the behavior:

inp1, inp2 = get_sample_input()
inp1 = torch.unsqueeze(inp1, 0)

traced_model = torch.jit.trace(model, example_inputs=[inp1, inp2])

batch_size = 1
trt_model = torch_tensorrt.compile(
    traced_model,
    inputs=[torch_tensorrt.Input((batch_size, 1), dtype=torch.long),
            torch_tensorrt.Input((batch_size, 1024, 8), dtype=torch.long)],
    enabled_precisions={torch.int8},
    workspace_size=20000000000,
    truncate_long_and_double=True,
)

and I get the following error:

Segmentation fault

Expected behavior

When I run the same code with float32 or half precision, everything works fine; the crash only happens with int8.
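
For comparison, here is a minimal sketch of the half-precision variant that works on my end, assuming the same traced_model, batch_size, and inputs as in the repro above; enabled_precisions is the only change.

# Same compile call with half precision instead of int8; this one succeeds.
trt_model_fp16 = torch_tensorrt.compile(
    traced_model,
    inputs=[torch_tensorrt.Input((batch_size, 1), dtype=torch.long),
            torch_tensorrt.Input((batch_size, 1024, 8), dtype=torch.long)],
    enabled_precisions={torch.half},  # {torch.float32} also compiles and runs fine
    workspace_size=20000000000,
    truncate_long_and_double=True,
)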

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0): 1.4.0
  • PyTorch Version (e.g. 1.0): 2.0.1+cu117
  • CPU Architecture: x64
  • OS (e.g., Linux): WSL
  • How you installed PyTorch (conda, pip, libtorch, source): pip
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives: Local
  • Python version: Python 3.10.6
  • CUDA version: 11.5
  • GPU models and configuration: 4090
  • Any other relevant information:

Additional context

@SongDabao

I have the same issue. Any update on this?

@AhmetHamzaEmra
Author

I am waiting for an update too 😅

@github-actions

github-actions bot commented Oct 5, 2023

This issue has not seen activity for 90 days. Remove the stale label or add a comment, or this will be closed in 10 days.

@AhmetHamzaEmra
Author

Why do you keep closing this issue? We are still waiting for an update.
