CreateTranscriptionTextAsync in real time (stream) #234
-
Hello! I am using CreateTranscriptionTextAsync to build an STT-to-TTS flow with the Whisper API and get an audio response, but I noticed that the transcription call is the slowest part of the process, taking around 2-3 seconds. Is there a way (like the streaming options for chat and audio) to speed this step up with the API? Thanks!
Replies: 1 comment
-
I don't believe OpenAI provides a streaming version of this endpoint. As a workaround, you can chop your audio up at points of silence and send each chunk as soon as it is ready, transcribing the chunks in sequence so you get partial results faster; see the sketch below.
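For illustration, here is a rough C# sketch of that idea: split 16-bit mono PCM into chunks at silent gaps, wrap each chunk in a minimal WAV header, and transcribe it as soon as it is produced instead of waiting for the full recording. The SplitOnSilence/WriteWav helpers and the threshold values are hypothetical, and the OpenAIClient / AudioEndpoint / AudioTranscriptionRequest(Stream, fileName) usage is my assumption about this library's API, so check the constructor overloads against the version you are on.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using OpenAI;        // assumed namespace for OpenAIClient
using OpenAI.Audio;  // assumed namespace for AudioTranscriptionRequest

public static class ChunkedTranscription
{
    // Split raw 16-bit mono PCM samples into chunks, cutting wherever the
    // signal stays below silenceThreshold for at least minSilenceSamples
    // consecutive samples (values here are arbitrary starting points).
    public static IEnumerable<short[]> SplitOnSilence(
        short[] samples, short silenceThreshold = 500, int minSilenceSamples = 8000)
    {
        var start = 0;
        var silentRun = 0;

        for (var i = 0; i < samples.Length; i++)
        {
            silentRun = Math.Abs(samples[i]) < silenceThreshold ? silentRun + 1 : 0;

            // Enough consecutive near-silent samples: emit the chunk so far.
            if (silentRun >= minSilenceSamples && i - start > minSilenceSamples)
            {
                yield return samples[start..(i + 1)];
                start = i + 1;
                silentRun = 0;
            }
        }

        if (start < samples.Length)
        {
            yield return samples[start..];
        }
    }

    // Transcribe each chunk as soon as it is available and join the results.
    // The AudioTranscriptionRequest(Stream, fileName) constructor is assumed.
    public static async Task<string> TranscribeChunksAsync(
        OpenAIClient api, IEnumerable<short[]> chunks, int sampleRate = 16000)
    {
        var pieces = new List<string>();

        foreach (var chunk in chunks)
        {
            using var wav = WriteWav(chunk, sampleRate);
            var request = new AudioTranscriptionRequest(wav, "chunk.wav");
            var text = await api.AudioEndpoint.CreateTranscriptionTextAsync(request);
            pieces.Add(text);
        }

        return string.Join(" ", pieces);
    }

    // Wrap 16-bit mono PCM samples in a minimal WAV container so Whisper
    // can decode the chunk.
    private static MemoryStream WriteWav(short[] samples, int sampleRate)
    {
        var stream = new MemoryStream();
        using (var writer = new BinaryWriter(stream, Encoding.ASCII, leaveOpen: true))
        {
            int dataSize = samples.Length * sizeof(short);

            writer.Write("RIFF".ToCharArray());
            writer.Write(36 + dataSize);
            writer.Write("WAVE".ToCharArray());
            writer.Write("fmt ".ToCharArray());
            writer.Write(16);                          // PCM fmt chunk size
            writer.Write((short)1);                    // audio format: PCM
            writer.Write((short)1);                    // channels: mono
            writer.Write(sampleRate);
            writer.Write(sampleRate * sizeof(short));  // byte rate
            writer.Write((short)sizeof(short));        // block align
            writer.Write((short)16);                   // bits per sample
            writer.Write("data".ToCharArray());
            writer.Write(dataSize);
            foreach (var s in samples) writer.Write(s);
        }

        stream.Position = 0;
        return stream;
    }
}
```

The silence threshold and minimum gap length depend on your microphone and noise floor, so treat those values as tuning knobs. If ordering of partial results is handled on your side, you could also start each chunk's transcription without awaiting the previous one and combine them with Task.WhenAll.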