diff --git a/README.md b/README.md
index a9072c4..372f198 100644
--- a/README.md
+++ b/README.md
@@ -56,6 +56,7 @@ The rough roadmap for FL CG is as follows:
 * [ONNX.js](https://github.com/microsoft/onnxjs):
   * ONNX.js is a JavaScript library for running ONNX models in browsers and on Node.js
   * ONNX.js has adopted WebAssembly and WebGL technologies to provide an optimized ONNX model inference runtime for both CPUs and GPUs
+  * [On-Device Training with ONNX Runtime](https://onnxruntime.ai/docs/get-started/training-on-device.html): ONNX Runtime Training offers an easy way to efficiently train and run inference with ONNX models on edge devices
 * [WebAssembly System Interface (WASI)](https://github.com/WebAssembly/wasi-nn)
   * Why Wasm for ML?: Trained machine learning models are typically deployed on a variety of devices with different architectures and operating systems. WebAssembly provides an ideal portable form of deployment for those models
   * Why WASI?: Although a whole machine learning framework could potentially be compiled into Wasm, special hardware acceleration is often needed in order to be performant. For example, SIMD instructions such as AVX-512 on a CPU can speed up performance by several hundred times. Other hardware accelerator examples include GPUs, TPUs, and FPGAs. None of those acceleration mechanisms are available within Wasm. In addition, the field of machine learning is still evolving rapidly, with new operations and network topologies emerging continuously. It would be a challenge to define an evolving set of operations to support in the API
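To make the ONNX.js entry above concrete, here is a minimal inference sketch in TypeScript using the documented `onnxjs` API. The model path `./model.onnx`, the `[1, 3, 224, 224]` input shape, and the `wasm` backend hint are illustrative assumptions, not something specified in the README.

```ts
// Minimal ONNX.js inference sketch.
// Assumptions: model file path, input shape, and backend hint are hypothetical.
import { InferenceSession, Tensor } from 'onnxjs';

async function main(): Promise<void> {
  // Hint ONNX.js to use its WebAssembly backend; in a browser, 'webgl' targets the GPU instead.
  const session = new InferenceSession({ backendHint: 'wasm' });

  // './model.onnx' is a placeholder model used only for illustration.
  await session.loadModel('./model.onnx');

  // Dummy float32 input shaped [1, 3, 224, 224], e.g. a typical image-classification tensor.
  const input = new Tensor(new Float32Array(1 * 3 * 224 * 224), 'float32', [1, 3, 224, 224]);

  // run() resolves to a map of output names to tensors; take the first output.
  const outputMap = await session.run([input]);
  const output = outputMap.values().next().value;
  console.log('output dims:', output.dims);
}

main().catch((err) => console.error(err));
```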