[BUG] cannot use alternative faster-whisper models #11
Comments
I suspect this is a limitation of the upstream wyoming_faster_whisper project that we ingest.
Should I open a bug there?
It's definitely worth asking the question; if they do support custom models and we need to change something to support that, we certainly can.
This should now be in place upstream and will be built shortly: https://github.com/rhasspy/wyoming-faster-whisper/releases/tag/v2.0.0
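For context, "custom model" here means a local CTranslate2 conversion of a Whisper model. faster-whisper itself can load such a model by directory path rather than by built-in name; a minimal sketch, assuming a model directory under /config (the path, device, and compute type below are illustrative only, not how the container wires things up):

```python
from faster_whisper import WhisperModel

# Load a custom CTranslate2 model from a local directory instead of a
# built-in model name. The path and settings here are only an example.
model = WhisperModel("/config/my-custom-ct2-model", device="cpu", compute_type="int8")

segments, info = model.transcribe("test.wav", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```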
Is there an existing issue for this?
Current Behavior
When I add my own faster-whisper ct2 model and set the corresponding ENV value, I get an error message saying that only the tiny, base, small, and medium int8 models can be used.
After renaming my model and the ENV value to medium-int8 and restarting the container, I get this error:
WARNING:wyoming_faster_whisper.download:Model hashes do not match
Moreover, the container starts a download and erases my model without permission:
INFO:main:Downloading FasterWhisperModel.MEDIUM_INT8 to /config
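The log lines above suggest the download helper only accepts a fixed set of model names and re-fetches any model file whose hash it does not recognize, which is what overwrote the custom model. A simplified sketch of that kind of check; this is not the actual wyoming_faster_whisper code, and the model names and hash values are hypothetical:

```python
import hashlib
from enum import Enum
from pathlib import Path

class FasterWhisperModel(Enum):
    TINY_INT8 = "tiny-int8"
    BASE_INT8 = "base-int8"
    SMALL_INT8 = "small-int8"
    MEDIUM_INT8 = "medium-int8"

# Hypothetical expected hashes; a custom conversion will never match these.
EXPECTED_SHA256 = {FasterWhisperModel.MEDIUM_INT8: "0123abcd..."}

def ensure_model(model: FasterWhisperModel, config_dir: Path) -> Path:
    model_path = config_dir / model.value / "model.bin"
    if model_path.exists():
        digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
        if digest == EXPECTED_SHA256.get(model):
            return model_path
        print("WARNING: Model hashes do not match")
    # A mismatch or missing file triggers a fresh download that replaces
    # whatever is in the directory, which is what erased the custom model.
    print(f"INFO: Downloading {model} to {config_dir}")
    # download_and_extract(model, config_dir)  # placeholder for the real fetch
    return model_path
```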
Expected Behavior
Steps To Reproduce
Copy your custom model to the config dir.
Start the container.
Environment
CPU architecture
x86-64
Docker creation
Container logs