
'TensorRT_model.pb' is twice as large as the original .pb model #15

Open
Julyliying opened this issue May 29, 2019 · 5 comments

Comments

@Julyliying

Hi,
When I use TensorRT to generate a TRT model, I found that the FP16 or INT8 TRT model is twice as large as the original TensorFlow .pb model. I think that's weird; do you know why?
My running environment is:
GPU GTX 1080 Ti, CUDA 10.0, cuDNN 7.5, TensorFlow 1.13, TensorRT 5.0
Thanks
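
For context, a minimal sketch of what the conversion step typically looks like with the TensorFlow 1.13 contrib TF-TRT API; the file names and the output node name below are placeholders, not taken from this repo's actual script.

```python
# Minimal sketch of a TF-TRT FP16 conversion with the TensorFlow 1.13 contrib API.
# 'frozen_model.pb', 'TensorRT_model.pb' and the output node name are placeholders.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen TensorFlow graph.
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Convert: TensorRT engines are embedded into the returned GraphDef.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['detection_output'],      # assumed output node name
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,  # 1 GB workspace
    precision_mode='FP16')             # or 'INT8'

# Serialize the converted graph to disk.
with tf.gfile.GFile('TensorRT_model.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```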

@Julyliying
Author

Does anyone know what is wrong?

@ruyijidan

My FP16 TRT model is 653.5 MB; I guess it's OK, because on a GTX 970M the FPS went from 10 to 20.

@ruyijidan

Can you see the TRT model's FPS? If you can, please tell me the detailed versions, thank you!
cuDNN 7.5.?, TensorFlow 1.13.?, TensorRT 5.0.?
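
In case it helps, a minimal sketch of how FPS could be measured on the converted graph in TensorFlow 1.13; the model path, tensor names, and input shape are assumptions, not taken from this repo.

```python
# Minimal sketch for timing inference (FPS) on a converted graph with TF 1.13.
# 'TensorRT_model.pb', tensor names, and the input shape are placeholder assumptions.
import time
import numpy as np
import tensorflow as tf

with tf.gfile.GFile('TensorRT_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    inp = graph.get_tensor_by_name('input:0')    # assumed input tensor name
    out = graph.get_tensor_by_name('output:0')   # assumed output tensor name

    dummy = np.random.rand(1, 512, 512, 3).astype(np.float32)  # assumed input shape
    with tf.Session(graph=graph) as sess:
        # Warm-up runs so TensorRT engine build time is not counted.
        for _ in range(10):
            sess.run(out, feed_dict={inp: dummy})
        start = time.time()
        n = 100
        for _ in range(n):
            sess.run(out, feed_dict={inp: dummy})
        print('FPS: %.1f' % (n / (time.time() - start)))
```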

@Julyliying
Author

GPU GTX 1080 Ti, CUDA 10.0, cuDNN 7.5, TensorFlow 1.13, TensorRT 5.0. @ruyijidan

@ruyijidan

@Julyliying Thank you! But there is little noticeable effect on a 1080 Ti.
cuDNN 7.5.1.10, TensorFlow 1.13.1, TensorRT 5.1.5.0
