How exactly are multiple inputs specified? They cannot be parsed in the documented input format #206
Hi, it seems ONNX-to-MLIR conversion does not support dynamic inputs. For example, my model's input is float32[DynamicDimension.0,8,3,320,320]; all I could do was change the input to [1,8,3,320,320], substituting 1 for the dynamic dimension.

That is probably why the bmodel examples all use fixed input batch sizes, e.g. batch=1 or batch=4. I suspect the TPU simply does not support dynamic inputs, and the tpu-mlir docs never explain how to declare an ONNX input as dynamic either.
As for this part of yours: I have been through the source code, and the parameter does appear to be supported there, but it does not seem to be used in the ONNX-to-MLIR step. Alternatively, you could try appending --dynamic to your arguments to enable dynamic input.
I suspect it will still fail, though, because at the time my model's inputs were:

```shell
model_transform \
    --model_name transformer \
    --model_def ./transformer_layer_0.onnx \
    --dynamic_shape_input_names 'seq_len','sum_leq_len','last_seq_len' \
    --input_shapes [['seq_len',1,4096],[1,1,'seq_len','sum_leq_len'],['seq_len',1,32,2],['last_seq_len',1,2,128],['last_seq_len',1,2,128]] \
    --mlir transformer.mlir
```