From c3e7b50916c73bd1cabc93c836c1a228e49b4946 Mon Sep 17 00:00:00 2001
From: Vaibhav Srivastav
Date: Tue, 21 May 2024 13:14:34 +0200
Subject: [PATCH] [doc] Add references to the fine-tuning blog and distil-whisper to Whisper doc.

---
 docs/source/en/model_doc/whisper.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/source/en/model_doc/whisper.md b/docs/source/en/model_doc/whisper.md
index 138f2b374bf..992ff71735d 100644
--- a/docs/source/en/model_doc/whisper.md
+++ b/docs/source/en/model_doc/whisper.md
@@ -78,6 +78,8 @@ Here is a step-by-step guide to transcribing an audio sample using a pre-trained
 
 A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Whisper. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
 
+- [Fine-tune Whisper](https://huggingface.co/blog/fine-tune-whisper) on your own dataset for better downstream performance.
+- [Distil-Whisper](https://huggingface.co/distil-whisper): Up to 6x faster, 2x smaller distilled Whisper models for English. We release the [model checkpoints](https://huggingface.co/distil-whisper) and [distillation code](https://github.com/huggingface/distil-whisper).
 - A fork with a script to [convert a Whisper model in Hugging Face format to OpenAI format](https://github.com/zuazo-forks/transformers/blob/convert_hf_to_openai/src/transformers/models/whisper/convert_hf_to_openai.py). 🌎
 Usage example:
 ```bash