
remove 'providers' option for melspec and embedding InferenceSession #27

Closed · fquirin opened this issue May 21, 2023 · 3 comments


fquirin commented May 21, 2023

Thanks for this very interesting open-source wake-word project! I've done some experiments for the SEPIA open assistant framework, and it looks like this could finally become a real alternative to Porcupine 🙂 (with the exception of custom wake-word creation, for now ^^).

Just a small comment: I'm getting warnings about 'CUDAExecutionProvider' ('CUDAExecutionProvider' is not in available provider names).
I think you can simply remove the providers option in the utils class, because the ONNX runtime will pick from the available providers automatically.
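To illustrate, roughly (a hedged sketch; "model.onnx" is just a placeholder path, not one of your actual model files):

```python
import onnxruntime as ort

# What currently triggers the warning on CPU-only installs: CUDA is
# requested explicitly but is not among the available providers.
# sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

# Option A: omit `providers`; on the CPU-only onnxruntime package this
# falls back to CPUExecutionProvider automatically.
sess = ort.InferenceSession("model.onnx")

# Option B: request the CPU provider explicitly, since it is always available.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
```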

Cu,
Florian

@dscripka (Owner)

Thanks, I'm glad you've found the library useful! That's an excellent point about the ONNX providers, and at minimum I can set the default to be the CPU provider so most people are less likely to get this warning. I'll plan to make this change in an upcoming release.

As for custom models, you can already train custom wake-word models (see an example notebook here), though I recognize that the process is a bit complex. I'm hoping to make custom model training much simpler in the future.


fquirin commented May 25, 2023

Looking forward to it 👍.

I spent some time trying to figure out whether I could port the runtime over to ONNX web, but stopped for now when I realized there are actually 3 ONNX models involved ^^. I'll keep this on my to-do list though, since my SEPIA clients mostly run as web apps (even the headless one).
Do you have any plans to work on that as well? :-)

@dscripka (Owner)

This issue will be fixed when #31 is merged, so closing.

And yes, there are three separate models, which I admit is a bit unwieldy when porting to other deployment environments. I have considered making a JavaScript library before, but it would take some time, as much of the functionality in the Python library would need to be ported as well.
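For anyone following along, here is a rough sketch of what chaining the three stages looks like when driven directly through onnxruntime. The file names are hypothetical, and the buffering/windowing and shape handling between stages are glossed over, so this is not the library's actual API:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical file names; the actual model files shipped with the
# library may be named differently.
melspec = ort.InferenceSession("melspectrogram.onnx", providers=["CPUExecutionProvider"])
embedding = ort.InferenceSession("embedding_model.onnx", providers=["CPUExecutionProvider"])
wakeword = ort.InferenceSession("hey_jarvis.onnx", providers=["CPUExecutionProvider"])

def score(audio_chunk: np.ndarray) -> float:
    """Push one chunk of 16 kHz mono audio through all three stages."""
    x = audio_chunk.astype(np.float32)[None, :]  # add a batch dimension
    # Input names are read from each session rather than hard-coded.
    mels = melspec.run(None, {melspec.get_inputs()[0].name: x})[0]
    feats = embedding.run(None, {embedding.get_inputs()[0].name: mels})[0]
    return float(wakeword.run(None, {wakeword.get_inputs()[0].name: feats})[0].squeeze())
```

Any port (JavaScript or otherwise) would need equivalents for all three sessions plus the audio preprocessing around them, which is what makes it non-trivial.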

Feel free to open a Discussion topic for this though, happy to continue the conversation there.
