中文 (Chinese version)

# The K210 Plugin for the RT-AK Platform

## 1. Introduction

This project is one of the hardware plugins of the RT-AK platform.

It uses the Kendryte K210 as the target hardware for AI application development on the RT-Thread RTOS. To deploy an AI model onto the K210, the Kendryte NNCase tool is integrated into RT-AK.
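Under the hood, such a plugin drives NNCase's `ncc` compiler to produce the kmodel. A minimal sketch of how a wrapper around `ncc` might look; note that the flags shown (`-i`, `-t`, `--inference-type`, `--dataset`) are assumptions based on the NNCase v0.2 CLI, not taken from this repository:

```python
import subprocess
from pathlib import Path

def build_ncc_command(ncc_path, model, kmodel, dataset=None, inference_type="uint8"):
    """Build an NNCase `ncc compile` command line.

    Flag names are assumptions based on the NNCase v0.2 CLI;
    check `ncc --help` for your version.
    """
    cmd = [str(ncc_path), "compile", str(model), str(kmodel),
           "-i", Path(model).suffix.lstrip("."),   # input format, e.g. tflite
           "-t", "k210",                           # target hardware
           "--inference-type", inference_type]
    if inference_type != "float" and dataset:
        # calibration data is only needed when quantizing
        cmd += ["--dataset", str(dataset)]
    return cmd

def convert(ncc_path, model, kmodel, **kw):
    # Run the conversion; raises CalledProcessError on failure.
    subprocess.run(build_ncc_command(ncc_path, model, kmodel, **kw), check=True)
```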

## 2. Structure of the K210 Plugin

```
./
├── backend_plugin_k210                 # Model registration for the RT-AK Lib
│   ├── backend_k210_kpu.c
│   ├── backend_k210_kpu.h
│   └── readme.md
├── datasets                            # Datasets for AI model quantization
│   ├── mnist_datasets
│   └── readme.md
├── docs                                # Docs for the K210 platform
│   ├── images
│   ├── Q&A.md
│   ├── Quick start for the K210 platform.md
│   └── version.md
├── generate_rt_ai_model_h.py
├── k210.c                              # Template for rt_ai_<model_name>_model.c
├── k_tools                             # Docs and software from Kendryte
│   ├── kendryte_datasheet_20180919020633.pdf
│   ├── kendryte_standalone_programming_guide_20190704110318_zh-Hans.pdf
│   ├── ncc
│   ├── ncc.exe
│   └── readme.md
├── plugin_k210_parser.py               # Input parameters of the K210 platform
├── plugin_k210.py
└── README.md
```

## 3. Parameters of the K210 Plugin

RT-AK input parameters = RT-AK basic parameters + K210 plugin parameters

- RT-AK basic parameters: Link
- The K210 plugin parameters are introduced below; for more details, see `plugin_k210_parser.py`

| Parameter | Description |
| --- | --- |
| `--embed_gcc` | Path to the cross-compilation toolchain; optional. |
| `--ext_tools` | Path to NNCase, which converts the original AI model to a kmodel. Default: `./platforms/k210/k_tools` |
| `--inference_type` | Inference type of the AI model. If `float`, the model is not quantized and the KPU is not used to accelerate computation. Default: `uint8` |
| `--dataset` | Dataset for AI model quantization; used when `--inference_type` is `uint8`. |
| `--dataset_format` | Format of the quantization dataset. Default: `image`; for audio datasets, use `raw`. |
| `--weights_quantize_threshold` | Threshold that controls whether an op is quantized, based on its weights range. Default: `32.000000` |
| `--output_quantize_threshold` | Threshold that controls whether an op is quantized, based on its output size. Default: `1024` |
| `--no_quantized_binary` | Do not quantize binary ops. |
| `--dump_weights_range` | Dump the weights range. |
| `--input-type` | Input type, `float` or `uint8`; defaults to the inference type. |
| `--clear` | Delete `convert_report.txt` after conversion. Default: `False` |
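The options in the table above could be declared with a standard `argparse` parser. A minimal sketch, with names and defaults taken from the table; the actual `plugin_k210_parser.py` in this repository may be structured differently:

```python
import argparse

def build_k210_parser():
    """Sketch of the K210 plugin's CLI options (illustrative, not the
    repository's actual plugin_k210_parser.py)."""
    p = argparse.ArgumentParser(description="K210 plugin parameters")
    p.add_argument("--embed_gcc", help="cross-compilation toolchain path (optional)")
    p.add_argument("--ext_tools", default="./platforms/k210/k_tools",
                   help="path to NNCase (ncc)")
    p.add_argument("--inference_type", default="uint8", choices=["uint8", "float"])
    p.add_argument("--dataset", help="quantization dataset (used for uint8)")
    p.add_argument("--dataset_format", default="image", choices=["image", "raw"])
    p.add_argument("--weights_quantize_threshold", type=float, default=32.0)
    p.add_argument("--output_quantize_threshold", type=int, default=1024)
    p.add_argument("--no_quantized_binary", action="store_true")
    p.add_argument("--dump_weights_range", action="store_true")
    p.add_argument("--clear", action="store_true",
                   help="delete convert_report.txt after conversion")
    return p
```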

## 4. Installing the K210 Plugin

- The K210 plugin does not need to be installed manually, but you do need to clone the RT-AK platform.
- Under `RT-AK/rt_ai_tools`, simply execute `python aitools.py --xxx` with `--platform=k210`; the plugin will then be downloaded automatically.

## 5. Command-Line Usage of the K210 Plugin

- Enter `RT-AK/rt_ai_tools` and run one of the commands below.
- `<your_project_path>` is the BSP path of the RT-Thread RTOS. We provide a BSP; the K210 SDK version is v0.5.6.
```shell
# no model quantization: --inference_type=float
$ python aitools.py --project=<your_project_path> --model=<your_model_path> --platform=k210 --inference_type=float

# no model quantization, with the cross-compilation toolchain path set
$ python aitools.py --project=<your_project_path> --model=<your_model_path> --platform=k210 --embed_gcc=<your_RISCV-GNU-Compiler_path> --inference_type=float

# uint8 quantization, computation accelerated by the KPU, dataset format is image
$ python aitools.py --project=<your_project_path> --model=<your_model_path> --platform=k210 --embed_gcc=<your_RISCV-GNU-Compiler_path> --dataset=<your_val_dataset>

# uint8 quantization, computation accelerated by the KPU, dataset format is not image
$ python aitools.py --project=<your_project_path> --model=<your_model_path> --platform=k210 --embed_gcc=<your_RISCV-GNU-Compiler_path> --dataset=<your_val_dataset> --dataset_format=raw

# example
$ python aitools.py --project="D:\Project\k210_val" --model="./Models/facelandmark.tflite" --model_name=facelandmark --platform=k210 --embed_gcc="D:\Project\k210_third_tools\xpack-riscv-none-embed-gcc-8.3.0-1.2\bin" --dataset="./platforms/plugin_k210/datasets/images"
```

Other commands:

```shell
# set the name of the converted AI model with --model_name (default: network)
$ python aitools.py --project=<your_project_path> --model=<your_model_path> --model_name=<model_name> --platform=k210 --embed_gcc=<your_RISCV-GNU-Compiler_path> --dataset=<your_val_dataset> --clear

# delete convert_report.txt with --clear
$ python aitools.py --project=<your_project_path> --model=<your_model_path> --platform=k210 --embed_gcc=<your_RISCV-GNU-Compiler_path> --dataset=<your_val_dataset> --clear
```

## 6. Compiling the Embedded Application Project

Please prepare the cross-compilation toolchain: `xpack-riscv-none-embed-gcc-8.3.0-1.2`.

Set the environment and build:

```shell
set RTT_EXEC_PATH=your_toolchains
# or modify rtconfig.py, line 22: os.environ['RTT_EXEC_PATH'] = r'your_toolchains'
scons -j 6
```

If compilation succeeds, `rtthread.elf` and `rtthread.bin` are generated; `rtthread.bin` can be flashed to the K210 hardware.

## 7. Workflow of the K210 Plugin

- Check the validity of the AI model.
- Convert the AI model to a kmodel and save it under `<project>/applications`.
- Convert the kmodel to hexadecimal form and save it under `<project>/applications`.
- Generate `rt_ai_<model_name>_model.h` under `<project>/applications`.
- Generate `rt_ai_<model_name>_model.c` under `<project>/applications`.
- Set the environment variable `RTT_EXEC_PATH` for the project.
- Delete `convert_report.txt`.
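The hexadecimal-conversion step above (handled by `generate_rt_ai_model_h.py` in this repository) amounts to embedding the kmodel bytes in a C header. A minimal sketch; the symbol names and header layout here are illustrative assumptions, not the plugin's actual output format:

```python
from pathlib import Path

def kmodel_to_c_array(kmodel_bytes: bytes, name: str = "network") -> str:
    """Render kmodel bytes as a C header with a byte array.

    The symbol names (<name>_kmodel, <name>_kmodel_len) are assumptions;
    the real generate_rt_ai_model_h.py may emit a different layout.
    """
    hex_bytes = ", ".join(f"0x{b:02x}" for b in kmodel_bytes)
    return (
        "/* auto-generated from the kmodel file */\n"
        f"const unsigned char {name}_kmodel[] = {{ {hex_bytes} }};\n"
        f"const unsigned int {name}_kmodel_len = {len(kmodel_bytes)};\n"
    )

def write_model_header(kmodel_path: str, out_dir: str, name: str = "network") -> Path:
    # Read the kmodel and write rt_ai_<name>_model.h next to the application sources.
    data = Path(kmodel_path).read_bytes()
    out = Path(out_dir) / f"rt_ai_{name}_model.h"
    out.write_text(kmodel_to_c_array(data, name))
    return out
```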