From 0ea6c197c32fff3c31cba9d667e8f82434a66f84 Mon Sep 17 00:00:00 2001
From: Russ d'Sa
Date: Fri, 4 Aug 2023 09:56:18 -0700
Subject: [PATCH] Update README.md

---
 examples/whisper/README.md | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/examples/whisper/README.md b/examples/whisper/README.md
index e8cc3d9a..177d9686 100644
--- a/examples/whisper/README.md
+++ b/examples/whisper/README.md
@@ -1,8 +1,6 @@
 ## Whisper example
 
-Whisper is not really suited for realtime applications.
-The input requires to have 30s of data.
-The way we can workaround this is by filling our data using silence
+[Whisper](https://github.com/openai/whisper) is a speech-to-text model from OpenAI. It ordinarily requires 30s of input data for transcription, making it challenging to use in real-time applications. We work around this limitation by padding shorter bursts of speech with silent audio packets.
 
 ## Run the demo
 
@@ -24,4 +22,4 @@ g++ -O3 -std=c++11 -pthread --shared -fPIC -static-libstdc++ whisper.cpp ggml.o
 Run the script and connect another participant with a microphone:
 You can use our Meet example or use the livekit-cli:
 
-e.g: `livekit-cli load-test --room yourroom --audio-publishers 1`
\ No newline at end of file
+e.g: `livekit-cli load-test --room yourroom --audio-publishers 1`
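The silence-padding workaround the patch describes can be sketched as follows. This is a minimal illustration, not the repo's actual code: the 16 kHz mono float-sample format, the `pad_to_window` helper, and the 2-second fake burst are all assumptions made for the example; only the 30s fixed input window comes from the text above.

```python
# Sketch of the silence-padding idea: Whisper expects a fixed 30 s input
# window, so a shorter burst of speech is padded out with silence (zeros).
# Assumes 16 kHz mono float samples (hypothetical, not the repo's code).

SAMPLE_RATE = 16000
WINDOW_SECONDS = 30
WINDOW_SAMPLES = SAMPLE_RATE * WINDOW_SECONDS  # 480000 samples

def pad_to_window(samples: list) -> list:
    """Pad (or truncate) a burst of speech to exactly one 30 s window."""
    if len(samples) >= WINDOW_SAMPLES:
        return samples[:WINDOW_SAMPLES]
    # Append zero-valued (silent) samples until the window is full.
    return samples + [0.0] * (WINDOW_SAMPLES - len(samples))

burst = [0.1] * (2 * SAMPLE_RATE)  # 2 s of fake speech
padded = pad_to_window(burst)
print(len(padded))  # 480000
```

In the real-time pipeline this padding happens continuously: each short burst captured from the room is filled to the model's window before transcription, so the model never waits for 30 s of live speech to accumulate.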