
Internal Error ((Unnamed Layer* 15) [Slice]: ISliceLayer size elements must be non-negative, but size on axis 0 is not) when running `builder->buildSerializedNetwork(*network, *config)` on GPU 3070 #3480

Open
Richard-Shasen opened this issue Nov 21, 2023 · 7 comments
Assignees
Labels
triaged Issue has been triaged by maintainers

Comments

@Richard-Shasen

Description

Environment

TensorRT Version: TensorRT 8.6

NVIDIA GPU: 3070

NVIDIA Driver Version: 511.23

CUDA Version: 11.6

Operating System: Windows 11


@zerollzeng (Collaborator)

We don't support negative slice size? @ttyio

@zerollzeng zerollzeng self-assigned this Nov 22, 2023
@zerollzeng zerollzeng added the triaged Issue has been triaged by maintainers label Nov 22, 2023
@Richard-Shasen (Author) commented Nov 22, 2023 via email

@ttyio (Collaborator) commented Nov 23, 2023

Hi @Richard-Shasen, we support dynamic shapes for all of the slice inputs: start, stride, and size.

To use dynamic shapes in TRT, you should set each dynamic dimension to -1. Your error message reports a negative size; do you happen to use any shape dims smaller than -1? Thanks!

@Richard-Shasen (Author)

> Hi @Richard-Shasen , we support dynamic shape for all the slice input, start, stride and size.
>
> To use dynamic shape in trt, we should specify the dynamic shape dim to -1. From your error message it says negative size, do you happen to use other shape dims that smaller than -1? Thanks!

Thank you @ttyio! I used getDimensions() to print the shape of the slice layer; it is {-1,32,160,160,0,0,0,0}, so none of the dims are smaller than -1.

@ttyio (Collaborator) commented Nov 23, 2023

@Richard-Shasen this batch dim -1 is not directly from the network input, right? Could you share the ops between the network input and this slice layer? Thanks!

@Richard-Shasen (Author)

> @Richard-Shasen this batch dim -1 is not directly from network input, right? could you share the op between network input and this slice layer? thanks!

Yes! The -1 is computed from another layer.
The input layer:

```cpp
nvinfer1::ITensor* data = network->addInput(kInputTensorName, dt, nvinfer1::Dims{4, {-1, 3, kInputH, kInputW}});
```

The profile:

```cpp
nvinfer1::IOptimizationProfile* profile = builder->createOptimizationProfile();
profile->setDimensions(
    kInputTensorName, nvinfer1::OptProfileSelector::kMIN, nvinfer1::Dims{ 4, { 1, 3, 640, 640 } });
profile->setDimensions(
    kInputTensorName, nvinfer1::OptProfileSelector::kOPT, nvinfer1::Dims{ 4, { 7, 3, 640, 640 } });
profile->setDimensions(
    kInputTensorName, nvinfer1::OptProfileSelector::kMAX, nvinfer1::Dims{ 4, { 14, 3, 640, 640 } });
config->addOptimizationProfile(profile);
```

The middle network:

```cpp
nvinfer1::IElementWiseLayer* conv0 = convBnSiLU(network, weightMap, data, 32, 3, 2, 1, "model.0");
nvinfer1::Dims d00 = conv0->getOutput(0)->getDimensions();
nvinfer1::IElementWiseLayer* conv1 = convBnSiLU(network, weightMap, conv0->getOutput(0), 64, 3, 2, 1, "model.1");
nvinfer1::Dims d01 = conv1->getOutput(0)->getDimensions();
nvinfer1::IElementWiseLayer* conv2 = C2F(network, profile, weightMap, *conv1->getOutput(0), 64, 64, 1, true, 0.5, "model.2");
```

The C2F module:

```cpp
nvinfer1::IElementWiseLayer* C2F(nvinfer1::INetworkDefinition* network, nvinfer1::IOptimizationProfile* profile,
    std::map<std::string, nvinfer1::Weights> weightMap, nvinfer1::ITensor& input, int c1, int c2, int n, bool shortcut,
    float e, const std::string& lname)
{
    int c_ = (float)c2 * e;
    nvinfer1::Dims in01 = input.getDimensions();
    nvinfer1::IElementWiseLayer* conv1 = convBnSiLU(network, weightMap, input, 2 * c_, 1, 1, 0, lname + ".cv1");
    std::cout << conv1->getName() << std::endl;
    nvinfer1::Dims d = conv1->getOutput(0)->getDimensions();
    int32_t subSliceValue[4] = { 0, d.d[1] / 2, 0, 0 };
    nvinfer1::ISliceLayer* split1 = dynamicSliceLayer(network, *conv1->getOutput(0),
        nvinfer1::Dims{ 4, { 0, 0, 0, 0 } }, subSliceValue, 4, nvinfer1::Dims{ 4, { 1, 1, 1, 1 } });
    std::cout << split1->getName() << std::endl;
    // ... (rest of the function not shown in the issue)
}
```

I define a function to slice a tensor dynamically:

```cpp
nvinfer1::ISliceLayer* dynamicSliceLayer(nvinfer1::INetworkDefinition* network, nvinfer1::ITensor& input,
    nvinfer1::Dims start, const int32_t subSliceValue[], int32_t size, nvinfer1::Dims stride)
{
    // start shape tensor
    int32_t* startValue = start.d;
    nvinfer1::Weights startWeight{ nvinfer1::DataType::kINT32, startValue, start.nbDims };
    nvinfer1::IConstantLayer* startConstLayer = network->addConstant(
        nvinfer1::Dims{ 1, { start.nbDims } }, startWeight);
    // stride shape tensor
    int32_t* strideValue = stride.d;
    nvinfer1::Weights strideWeight{ nvinfer1::DataType::kINT32, strideValue, stride.nbDims };
    nvinfer1::IConstantLayer* strideConstLayer = network->addConstant(
        nvinfer1::Dims{ 1, { stride.nbDims } }, strideWeight);

    // size shape tensor: input shape minus subSliceValue
    nvinfer1::Weights subSliceWeight{ nvinfer1::DataType::kINT32, subSliceValue, size };
    nvinfer1::IConstantLayer* subSliceConstLayer = network->addConstant(
        nvinfer1::Dims{ 1, { size } }, subSliceWeight);
    std::cout << subSliceConstLayer->getName() << std::endl;
    nvinfer1::IShapeLayer* inputShapeLayer = network->addShape(input);
    std::cout << inputShapeLayer->getName() << std::endl;
    nvinfer1::IElementWiseLayer* sliceResultShapeLayer = network->addElementWise(*inputShapeLayer->getOutput(0),
        *subSliceConstLayer->getOutput(0), nvinfer1::ElementWiseOperation::kSUB);
    std::cout << sliceResultShapeLayer->getName() << std::endl;
    // placeholder dims; the real start/size/stride are supplied via setInput below
    nvinfer1::ISliceLayer* slice = network->addSlice(input, nvinfer1::Dims{ start.nbDims, { 0 } },
        nvinfer1::Dims{ size, { 0 } }, nvinfer1::Dims{ stride.nbDims, { 0 } });
    std::cout << slice->getName() << std::endl;
    slice->setInput(0, input);
    slice->setInput(1, *startConstLayer->getOutput(0));
    slice->setInput(2, *sliceResultShapeLayer->getOutput(0));
    slice->setInput(3, *strideConstLayer->getOutput(0));
    nvinfer1::Dims sl_d = slice->getOutput(0)->getDimensions();
    return slice;
}
```

So the -1 is computed by network->addElementWise().

@Lemonononon

I have the same problem; have you solved it? @ttyio @zerollzeng @Richard-Shasen Thanks!
