ExecuTorch uses CMake as its primary build system. Even if you don't use CMake directly, CMake can generate build files for other tools like Make, Ninja, or Xcode. For more information, see cmake-generators(7).
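For example, if you prefer Ninja over the default Makefile generator, you can pass -G Ninja when configuring. This is a minimal sketch; it assumes ninja is installed and reuses the cmake-out layout used later in this document:

# Configure the build with the Ninja generator instead of Makefiles
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake -G Ninja ..)
# `cmake --build` drives whichever generator was configured, so the build
# commands shown later in this document stay the same
cmake --build cmake-out -j9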
ExecuTorch's CMake build system covers the pieces of the runtime that are likely to be useful to embedded systems users.
libexecutorch.a
: The core of the ExecuTorch runtime. Does not contain any operator/kernel definitions or backend definitions.

libportable_kernels.a
: The implementations of ATen-compatible operators, following the signatures in //kernels/portable/functions.yaml.

libportable_kernels_bindings.a
: Generated code that registers the contents of libportable_kernels.a with the runtime.
  - NOTE: This must be linked into your application with a flag like -Wl,-force_load or -Wl,--whole-archive. It contains load-time functions that automatically register the kernels, but linkers will often prune those functions by default because there are no direct calls to them. See the example link line after this list.

executor_runner
: An example tool that runs a .pte program file using all 1 values as inputs, and prints the outputs to stdout. It is linked with libportable_kernels.a, so the program may use any of the operators it implements.
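As a concrete illustration of the NOTE above, here is a hedged sketch of a link line for a hypothetical application; my_app and main.o are placeholders, and the library paths should be adjusted to wherever your build placed the archives:

# GNU ld: wrap only the bindings library in --whole-archive so its
# load-time registration functions survive linking even though nothing
# calls them directly
g++ main.o \
  -Wl,--whole-archive libportable_kernels_bindings.a -Wl,--no-whole-archive \
  libportable_kernels.a libexecutorch.a \
  -o my_app
# Apple ld: use -Wl,-force_load,<path-to-libportable_kernels_bindings.a> instead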
Follow the steps below to install the tools needed to build with CMake on your machine.
- If your system's version of python3 is older than 3.11:
  - Run pip install tomli
- Install CMake version 3.19 or later:
  - Run conda install cmake or pip install cmake.
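To confirm the tools meet these requirements, you can check the installed versions (exact output formatting varies by platform):

python3 --version   # tomli above is only needed if this reports a version older than 3.11
cmake --version     # should report version 3.19 or later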
Follow these steps after cloning or pulling the upstream repo, since the build dependencies may have changed.
# cd to the root of the executorch repo
cd executorch
# Clean and configure the CMake build system. It's good practice to do this
# whenever cloning or pulling the upstream repo.
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out && cmake ..)
Once this is done, you won't need to do it again until you pull from the upstream repo or modify any CMake-related files.
The release build offers optimizations intended to improve performance and reduce binary size. It disables program verification and ExecuTorch logging, and adds optimization flags. Configure the CMake build with:
-DCMAKE_BUILD_TYPE=Release
To further optimize the release build for size, use both:
-DCMAKE_BUILD_TYPE=Release \
-DOPTIMIZE_SIZE=ON
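Putting it together, a size-optimized release configure step might look like the following sketch, reusing the cmake-out layout from above:

# Reconfigure cmake-out as an optimized release build
(rm -rf cmake-out && mkdir cmake-out && cd cmake-out \
  && cmake -DCMAKE_BUILD_TYPE=Release -DOPTIMIZE_SIZE=ON ..)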
See CMakeLists.txt for additional build options.
Build all targets with:
# cd to the root of the executorch repo
cd executorch
# Build using the configuration that you previously generated under the
# `cmake-out` directory.
#
# NOTE: The `-j` argument specifies how many jobs/processes to use when
# building, and tends to speed up the build significantly. It's typical to use
# "core count + 1" as the `-j` value.
cmake --build cmake-out -j9
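If you'd rather not hard-code the job count, you can compute "core count + 1" at build time; a small sketch, where nproc is the Linux command and sysctl -n hw.ncpu its macOS equivalent:

# Linux
cmake --build cmake-out -j"$(( $(nproc) + 1 ))"
# macOS
cmake --build cmake-out -j"$(( $(sysctl -n hw.ncpu) + 1 ))"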
First, generate an add.pte or other ExecuTorch program file following the instructions in Setting up ExecuTorch. Then, pass it to the command line tool:
./cmake-out/executor_runner --model_path path/to/add.pte
If it worked, you should see the message "Model executed successfully" followed by the output values.
I 00:00:00.000526 executorch:executor_runner.cpp:82] Model file add.pte is loaded.
I 00:00:00.000595 executorch:executor_runner.cpp:91] Using method forward
I 00:00:00.000612 executorch:executor_runner.cpp:138] Setting up planned buffer 0, size 48.
I 00:00:00.000669 executorch:executor_runner.cpp:161] Method loaded.
I 00:00:00.000685 executorch:executor_runner.cpp:171] Inputs prepared.
I 00:00:00.000764 executorch:executor_runner.cpp:180] Model executed successfully.
I 00:00:00.000770 executorch:executor_runner.cpp:184] 1 outputs:
Output 0: tensor(sizes=[1], [2.])
Following are instructions on how to perform cross-compilation for Android and iOS.
- Prerequisite: Android NDK. Choose one of the following:
  - Option 1: Download Android Studio and follow the instructions to install the NDK.
  - Option 2: Download the Android NDK directly from here.
Assuming Android NDK is available, run:
# Run the following lines from the `executorch/` folder
rm -rf cmake-android-out && mkdir cmake-android-out && cd cmake-android-out
# Point -DCMAKE_TOOLCHAIN_FILE at the android.toolchain.cmake file inside your NDK installation
cmake -DCMAKE_TOOLCHAIN_FILE=/Users/{user_name}/Library/Android/sdk/ndk/25.2.9519653/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a ..
cd ..
cmake --build cmake-android-out -j9
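# make a directory on the device to hold the binary and the model file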
adb shell mkdir -p /data/local/tmp/executorch
# push the binary to an Android device
adb push cmake-android-out/executor_runner /data/local/tmp/executorch
# push the model file
adb push add.pte /data/local/tmp/executorch
adb shell "/data/local/tmp/executorch/executor_runner --model_path /data/local/tmp/executorch/add.pte"
For iOS we'll build frameworks instead of static libraries; these will also contain the public headers.
- Install Xcode from the Mac App Store and then install the Command Line Tools using the terminal:
xcode-select --install
- Build the frameworks:
./build/build_apple_frameworks.sh
Run the above command with the --help flag to learn more about how to build additional backends (like Core ML, MPS or XNNPACK). Note that some backends may require additional dependencies and certain versions of Xcode and iOS.
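For example, to list the supported options before enabling extra backends:

./build/build_apple_frameworks.sh --help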
- Copy over the generated .xcframework bundles to your Xcode project, link them against your targets, and don't forget to add an extra linker flag -all_load.
Check out the iOS Demo App tutorial for more info.
You have successfully cross-compiled the executor_runner binary for Android and iOS. You can now start exploring advanced features and capabilities. Here is a list of sections you might want to read next:
- Selective build to build the runtime that links to only kernels used by the program, which can provide significant binary size savings.
- Tutorials on building Android and iOS demo apps.
- Tutorials on deploying applications to embedded devices such as ARM Cortex-M/Ethos-U and Xtensa HiFi DSP.