Describe the feature you'd like to request

It would be great if you could choose which "back end" to use for processing; at the moment it seems to rely on the host having a dGPU. However, cloud GPU platforms like Modal make it possible to massively speed up transcription by spinning up hundreds of containers in parallel.

Describe the solution you'd like

Something like https://github.com/modal-labs/modal-examples/tree/26c911ba880a1311e748c6b01f911d065aed4cc4/06_gpu_and_ml/whisper_pod_transcriber, where the API facade stays the same, but the work queue chunks the big audio files and hands the chunks to Modal containers for processing.
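For illustration, here is a minimal sketch of what that fan-out could look like with Modal's Python SDK, assuming the `openai-whisper` package and Modal's `App`/`.map()` API; `split_audio` is a hypothetical chunking helper written for this sketch, not part of any existing codebase:

```python
import modal

app = modal.App("whisper-parallel-transcriber")

# Container image with ffmpeg and whisper; exact packages are illustrative.
image = (
    modal.Image.debian_slim()
    .apt_install("ffmpeg")
    .pip_install("openai-whisper")
)


def split_audio(path: str, seconds: int = 60) -> list[bytes]:
    """Hypothetical helper: naively cut the file into fixed-length WAV
    segments with ffmpeg and return each segment's bytes."""
    import pathlib
    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        subprocess.run(
            ["ffmpeg", "-i", path, "-f", "segment",
             "-segment_time", str(seconds), f"{tmp}/chunk_%04d.wav"],
            check=True,
        )
        return [p.read_bytes() for p in sorted(pathlib.Path(tmp).glob("*.wav"))]


@app.function(gpu="any", image=image)
def transcribe_chunk(chunk: bytes) -> str:
    """Transcribe one audio chunk inside a GPU container."""
    import tempfile

    import whisper

    model = whisper.load_model("base")
    with tempfile.NamedTemporaryFile(suffix=".wav") as f:
        f.write(chunk)
        f.flush()
        return model.transcribe(f.name)["text"]


@app.local_entrypoint()
def main(path: str):
    # .map() fans the chunks out across many containers in parallel;
    # results come back in input order, so joining them preserves the audio order.
    print(" ".join(transcribe_chunk.map(split_audio(path))))
```

This would be run with something like `modal run transcriber.py --path episode.wav`. Fixed-length chunking can cut words at segment boundaries; a real version would want silence-aware segmentation along the lines of the linked example.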
@auduny is this something you could look at?
That's really not a use case for us atm, but feel free to open a PR with this feature.