[Docs] DeepStream documentations #545

Merged 5 commits on Feb 21, 2023
48 changes: 48 additions & 0 deletions projects/easydeploy/deepstream/README.md
@@ -0,0 +1,48 @@
# Inference MMYOLO Models with DeepStream

This project demonstrates how to run inference on MMYOLO models with a customized parser in the [DeepStream SDK](https://developer.nvidia.com/deepstream-sdk).

## Prerequisites

### 1. Install Nvidia Driver and CUDA

First, follow the official documentation to install the NVIDIA graphics driver and a CUDA version matched to your GPU or target NVIDIA AIoT device.
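
As a quick sanity check on an x86 host (Jetson devices report this information differently), you can confirm that both the driver and the CUDA toolkit are visible:

```bash
# Should list your GPU and the installed driver version
nvidia-smi
# Should print the installed CUDA toolkit version
nvcc --version
```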

### 2. Install DeepStream SDK

Second, follow the official instructions to download and install the DeepStream SDK. The current stable version of DeepStream is v6.2.
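
If the installation succeeded, the DeepStream reference app should be available on your `PATH`; a common way to verify this is:

```bash
# Print the DeepStream version and the versions of its dependencies
deepstream-app --version-all
```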

### 3. Generate TensorRT Engine

As DeepStream builds on top of several NVIDIA libraries, you first need to convert your trained MMYOLO models to TensorRT engine files. We strongly recommend using the supported TensorRT deployment solution in [EasyDeploy](../../easydeploy/).
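
Assuming EasyDeploy gives you an ONNX file, one possible way to build an engine directly on the target device is `trtexec`, which ships with TensorRT; the file names below are placeholders:

```bash
# Build a TensorRT engine from an exported ONNX model (placeholder paths)
trtexec --onnx=end2end.onnx --saveEngine=rtmdet.engine --fp16
# --fp16 is optional and only useful if the device supports fast FP16
```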

## Build and Run

Please make sure that your converted TensorRT engine is placed in the `deepstream` folder as the config expects. Create your own model config file and change the `config-file` parameter in [deepstream_app_config.txt](deepstream_app_config.txt) to point to the model you want to run.
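
For reference, the `config-file` parameter typically lives in the `[primary-gie]` group of the app config; a minimal sketch with this project's file layout:

```
[primary-gie]
enable=1
# point this at the model config you want to run with
config-file=configs/config_infer_rtmdet.txt
```

Then build the project: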

```bash
mkdir build && cd build
cmake ..
make -j$(nproc) && make install
```

Then you can run inference with the following command.

```bash
deepstream-app -c deepstream_app_config.txt
```

## Code Structure

```bash
├── deepstream
│ ├── configs # config file for MMYOLO models
│ │ └── config_infer_rtmdet.txt
│ ├── custom_mmyolo_bbox_parser # customized parser for MMYOLO models to DeepStream formats
│ │ └── nvdsparsebbox_mmyolo.cpp
│ ├── CMakeLists.txt
│ ├── coco_labels.txt # labels for COCO detection
│ ├── deepstream_app_config.txt # DeepStream reference app config for MMYOLO models
│ ├── README_zh-CN.md
│ └── README.md
```
48 changes: 48 additions & 0 deletions projects/easydeploy/deepstream/README_zh-CN.md
@@ -0,0 +1,48 @@
# Running Inference on MMYOLO Models with the DeepStream SDK

This project demonstrates how to run inference on MMYOLO models with the [DeepStream SDK](https://developer.nvidia.com/deepstream-sdk) and a customized parser.

## Prerequisites

### 1. Install the NVIDIA Driver and CUDA

First, install the graphics driver and CUDA matched to your GPU and target device.

### 2. Install the DeepStream SDK

The current stable version of the DeepStream SDK is v6.2, which is the officially recommended version.

### 3. Convert MMYOLO Models to TensorRT Engines

We recommend using the TensorRT solution in EasyDeploy to convert and deploy the target model; see [this document](../../easydeploy/docs/model_convert.md) for details.

## Build and Run

This project uses the MMYOLO RTMDet model by default. To run another model, modify the config files in this directory accordingly, then place the converted TensorRT engine in the current directory.
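
As a rough sketch, a per-model config such as `configs/config_infer_rtmdet.txt` needs at least the engine path, the label file, and the custom parser; the parser symbol and library path below are hypothetical, check `nvdsparsebbox_mmyolo.cpp` and `CMakeLists.txt` for the actual values:

```
[property]
# placeholder paths: adjust to your exported engine and build output
model-engine-file=../rtmdet.engine
labelfile-path=../coco_labels.txt
num-detected-classes=80
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# hypothetical parser symbol and library path
parse-bbox-func-name=NvDsInferParseCustomMMYOLO
custom-lib-path=../build/libnvdsparsebbox_mmyolo.so
```

After adjusting the configs, run the following commands to build: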

```bash
mkdir build && cd build
cmake ..
make -j$(nproc) && make install
```

After building, run inference with the following command:

```bash
deepstream-app -c deepstream_app_config.txt
```

## Code Structure

```bash
├── deepstream
│ ├── configs # DeepStream configs for MMYOLO models
│ │ └── config_infer_rtmdet.txt
│ ├── custom_mmyolo_bbox_parser # customized parser converting MMYOLO outputs to DeepStream formats
│ │ └── nvdsparsebbox_mmyolo.cpp
│ ├── CMakeLists.txt
│ ├── coco_labels.txt # COCO labels
│ ├── deepstream_app_config.txt # DeepStream reference app config for MMYOLO models
│ ├── README_zh-CN.md
│ └── README.md
```