whisper.objc

Minimal Obj-C application for automatic offline speech recognition. The inference runs locally, on-device.

whisper-iphone-13-mini-2.mp4

Real-time transcription demo:

whisper-iphone-13-mini-3.mp4

Usage

git clone https://github.com/ggerganov/whisper.cpp
open whisper.cpp/examples/whisper.objc/whisper.objc.xcodeproj/

# If you don't want to convert a Core ML model, you can skip this step by creating a dummy model:
mkdir models/ggml-base.en-encoder.mlmodelc
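A small note on the dummy-model step: using `mkdir -p` makes it safe to repeat (a sketch, assuming it is run from the root of the cloned `whisper.cpp` checkout):

```shell
# Run from the whisper.cpp checkout root (assumption).
# -p creates missing parent directories and is a no-op when the
# directory already exists, so the step can be re-run harmlessly.
mkdir -p models/ggml-base.en-encoder.mlmodelc
```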

Make sure to build the project in Release.

Also, don't forget to add the -DGGML_USE_ACCELERATE compiler flag for ggml.c in Build Phases. This can significantly improve transcription performance.

If you want to enable Core ML support, you can add the -DWHISPER_USE_COREML -DWHISPER_COREML_ALLOW_FALLBACK compiler flags for whisper.cpp in Build Phases.

Then follow the Core ML support section of the README to convert the model.

This project also adds -O3 -DNDEBUG to Other C Flags, but adding flags at the app-project level is not ideal in a real-world project (they apply to all C/C++ files); in your own project, consider splitting the xcodeproj into a workspace.
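One common way to scope such flags (a sketch, not taken from this repo): build ggml/whisper as a separate library target inside a workspace and attach the flags only to that target, e.g. via an `.xcconfig`:

```
// ggml.xcconfig (illustrative): assign only to the library target that
// builds ggml/whisper, so the optimization and feature flags do not
// leak into the app's own C/C++ sources.
OTHER_CFLAGS = -O3 -DNDEBUG -DGGML_USE_ACCELERATE
```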