Releases: Purfview/whisper-standalone-win
Faster-Whisper-XXL r239.1
Standalone Faster-Whisper implementation using optimized CTranslate2 models.
Includes all Standalone Faster-Whisper features + some additional ones. Read here.
Last included commit: #239
Faster-Whisper-XXL includes all needed libs.
Some new stuff in r239.1 (a usage sketch follows the lists below):
New features/args:
--batched
--unmerged
--batch_size
--multilingual
--hotwords
--rehot
--ignore_dupe_prompt
New vad_method's:
silero_v5
silero_v4_fw
silero_v5_fw
Removed experimental options that were not useful:
no_speech_strict_lvl
nullify_non_speech
prompt_max
reprompt's first option
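A minimal command-line sketch combining a few of the new options; the executable name, file name, and parameter values here are assumptions for illustration rather than syntax taken from this release:

    faster-whisper-xxl.exe audio.wav --model medium --language en --batched --batch_size 8 --vad_method silero_v5 --hotwords "CTranslate2"

Here --batched presumably enables batched inference with --batch_size controlling how many segments are decoded per batch, while --vad_method selects one of the new Silero VAD variants.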
Link to the changelog.
Faster-Whisper r192.3
Standalone Faster-Whisper implementation using optimized CTranslate2 models.
GPU execution requires cuBLAS and cuDNN 8.x libs for CUDA v11.x.
Last included commit: #192
Note:
This release branch is deprecated; use Faster-Whisper-XXL instead.
Link to the changelog.
Whisper-OpenAI r150
Standalone Whisper.
Last included commit: #150.
Whisper-OpenAI includes all needed libs.
cuBLAS and cuDNN
Place the libs in the same folder as the Faster-Whisper executable, or to:
Windows: the System32 dir.
Linux: a dir in the LD_LIBRARY_PATH env.
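A rough sketch of both placement options; the directory paths and exact library file names below are examples and will differ between cuBLAS/cuDNN versions:

    :: Windows: copy the DLLs into the folder holding the executable (example path)
    copy cublas64_11.dll "C:\Faster-Whisper-XXL\"
    copy cudnn_ops_infer64_8.dll "C:\Faster-Whisper-XXL\"

    # Linux: point LD_LIBRARY_PATH at the dir holding the .so files (example path)
    export LD_LIBRARY_PATH=/opt/cuda-libs:$LD_LIBRARY_PATH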
.7z vs .zip archives contain the same files.
CUDA11_v2 is the last with support for GPUs with Kepler chips (a quick check follows the list below).
CUDA11_v2: cuBLAS v11.11.3.6, cuDNN v8.7.0.84
CUDA11_v3: cuBLAS v11.11.3.6, cuDNN v8.9.6.50
CUDA11_v4: cuBLAS v11.11.3.6, cuDNN v8.9.7.29
CUDA12_v1: cuBLAS v12.4.5.8, cuDNN v8.9.7.29
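One quick way to check which archive fits a given machine; the compute_cap query needs a reasonably recent NVIDIA driver, and Kepler cards report compute capability 3.x:

    nvidia-smi --query-gpu=name,compute_cap --format=csv

The header of plain nvidia-smi output also shows the highest CUDA version the installed driver supports, which helps choose between the CUDA11 and CUDA12 archives.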