Describe the bug
The converter fails to convert the multilingual Whisper models to ONNX; it only works for the *.en models.
How to reproduce
python ./scripts/convert.py --model_id openai/whisper-tiny --from_hub --quantize --task speech2seq-lm-with-past
result:
Merging decoders
Traceback (most recent call last):
File "D:\Users\Dimq1\source\OpenAI\transformers.js\scripts\convert.py", line 301, in
main()
File "D:\Users\Dimq1\source\OpenAI\transformers.js\scripts\convert.py", line 293, in main
merge_decoders(
File "C:\Users\Dimq1\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\onnx\graph_transformations.py", line 135, in merge_decoders
_unify_onnx_outputs(decoder, decoder_with_past)
File "C:\Users\Dimq1\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\onnx\transformations_utils.py", line 147, in _unify_onnx_outputs
_check_num_outputs(model1, model2)
File "C:\Users\Dimq1\AppData\Local\Programs\Python\Python310\lib\site-packages\optimum\onnx\transformations_utils.py", line 136, in _check_num_outputs
raise ValueError(
ValueError: Two model protos need to have the same outputs. But one has 18 outputs while the other has 10 outputs.
PS D:\Users\Dimq1\source\OpenAI\transformers.js>
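If it helps with triage, here is a minimal sketch that prints the output names of the two decoder graphs that merge_decoders() is comparing, so you can see exactly which outputs differ. The file paths/names below are assumptions about where convert.py writes the exported models; adjust them to your output directory.

import onnx

# Assumed filenames; the converter may name or place these differently on your machine.
decoder = onnx.load("models/openai/whisper-tiny/decoder_model.onnx")
decoder_with_past = onnx.load("models/openai/whisper-tiny/decoder_with_past_model.onnx")

for label, model in [("decoder", decoder), ("decoder_with_past", decoder_with_past)]:
    names = [o.name for o in model.graph.output]
    print(label, "-", len(names), "outputs")
    for n in names:
        print("  ", n)

The two counts should match the 18 vs. 10 reported in the ValueError above.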
Expected behavior
openai/whisper-tiny is converted to ONNX and quantized successfully, the same as for the *.en models.
Logs/screenshots
If applicable, add logs/screenshots to help explain your problem.
Environment
Transformers.js version:
Browser (if applicable):
Operating system (if applicable):
Other:
Additional context
Add any other context about the problem here.