4. Deployment of the YOLOv6 Model for General Use
4.1. Introduction
This document describes how to deploy a YOLOv6 model on the CV181x development board. The main steps are:
- Convert the PyTorch YOLOv6 model to an ONNX model
- Convert the ONNX model to the cvimodel format
- Write code against the calling interface to obtain the inference results
4.2. Convert the pt Model to ONNX
Download the official YOLOv6 repository [meituan/YOLOv6](https://github.com/meituan/YOLOv6) and the YOLOv6 weight file. Create a directory named weights under the YOLOv6 folder and place the downloaded weight file in yolov6-main/weights/.
Modify the yolov6-main/deploy/ONNX/export_onnx.py file and add the following function:
def detect_forward(self, x):
    final_output_list = []
    for i in range(self.nl):
        b, _, h, w = x[i].shape
        l = h * w
        x[i] = self.stems[i](x[i])
        cls_x = x[i]
        reg_x = x[i]
        cls_feat = self.cls_convs[i](cls_x)
        cls_output = self.cls_preds[i](cls_feat)
        reg_feat = self.reg_convs[i](reg_x)
        reg_output_lrtb = self.reg_preds[i](reg_feat)
        final_output_list.append(cls_output.permute(0, 2, 3, 1))
        final_output_list.append(reg_output_lrtb.permute(0, 2, 3, 1))
    return final_output_list
Then use dynamic binding to replace the forward method of the YOLOv6 detection head. First import types, then add the following code before the ONNX export:
print("===================")
print(model)
print("===================")
# Dynamic binding to modify the forward function of model detect
model.detect.forward = types.MethodType(detect_forward, model.detect)
y = model(img) # dry run
# ONNX export
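For reference, the sketch below shows where this rebinding sits in a standalone export flow. It is only a minimal illustration under a few assumptions: the YOLOv6 repository root is on PYTHONPATH, detect_forward from the snippet above is defined in the same file, and load_checkpoint is the helper from the upstream repository (its exact import path and signature may differ between versions). The official export_onnx.py performs the equivalent steps when you run the command in the next step.

```python
import types

import onnx
import torch

from yolov6.utils.checkpoint import load_checkpoint  # helper from the YOLOv6 repo (assumed import path)

device = torch.device('cpu')
model = load_checkpoint('./weights/yolov6n.pt', map_location=device)  # load the .pt weights
img = torch.zeros(1, 3, 640, 640, device=device)  # dummy input: batch 1, 3x640x640

# Rebind the Detect head so it returns the raw per-branch class/box outputs
model.detect.forward = types.MethodType(detect_forward, model.detect)
model.eval()
y = model(img)  # dry run

# Export the rebound model and run a basic structural check on the result
torch.onnx.export(model, img, 'yolov6n.onnx', opset_version=12, input_names=['images'])
onnx.checker.check_model(onnx.load('yolov6n.onnx'))
```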
Then run the following command in the yolov6-main/ directory, where:
- --weights is the path to the PyTorch model file
- --img is the model input size
- --batch is the model input batch size
- --simplify simplifies the ONNX model
python ./deploy/ONNX/export_onnx.py \
--weights ./weights/yolov6n.pt \
--img 640 \
--batch 1
Running the command produces the ONNX model.
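Optionally, the exported model can be sanity-checked before conversion. The sketch below is a hypothetical check with onnxruntime; it assumes the output file is named yolov6n.onnx and that, with the rebound head, the model exposes one class branch and one box branch per detection scale.

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('yolov6n.onnx', providers=['CPUExecutionProvider'])
inp = sess.get_inputs()[0]  # the single image input
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)

outputs = sess.run(None, {inp.name: dummy})  # one dummy inference
for i, out in enumerate(outputs):
    print(i, out.shape)  # expect class/box output pairs for the three detection scales
```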
4.3. Convert the ONNX Model to cvimodel
For the cvimodel conversion, refer to the "Convert the ONNX model to cvimodel" section in the YOLOv5 porting chapter.
4.4. YOLOv6 Interface Description
The interface provides preprocessing and algorithm parameter settings:

YoloPreParam input preprocessing settings:
- factor – reciprocal of the preprocessing variance
- mean – preprocessing mean
- use_quantize_scale – preprocessing image size
- format – image format

YoloAlgParam:
- cls – number of classes of the YOLOv6 model

> YOLOv6 is an anchor-free object detection network, so no anchors need to be passed in.

In addition, there are two threshold settings for YOLOv6:
- CVI_TDL_SetModelThreshold sets the confidence threshold (default 0.5)
- CVI_TDL_SetModelNmsThreshold sets the NMS threshold (default 0.5)
// setup preprocess
YoloPreParam p_preprocess_cfg;

for (int i = 0; i < 3; i++) {
  printf("assign val %d \n", i);
  p_preprocess_cfg.factor[i] = 0.003922;
  p_preprocess_cfg.mean[i] = 0.0;
}
p_preprocess_cfg.use_quantize_scale = true;
p_preprocess_cfg.format = PIXEL_FORMAT_RGB_888_PLANAR;

printf("start yolov6 algorithm config \n");
// setup yolov6 param
YoloAlgParam p_yolov6_param;
p_yolov6_param.cls = 80;

ret = CVI_TDL_Set_YOLOV6_Param(tdl_handle, &p_preprocess_cfg, &p_yolov6_param);
if (ret != CVI_SUCCESS) {
  printf("Can not set Yolov6 parameters %#x\n", ret);
  return ret;
}
printf("yolov6 set param success!\n");

ret = CVI_TDL_OpenModel(tdl_handle, CVI_TDL_SUPPORTED_MODEL_YOLOV6, model_path.c_str());
if (ret != CVI_SUCCESS) {
  printf("open model failed %#x!\n", ret);
  return ret;
}

// set threshold
CVI_TDL_SetModelThreshold(tdl_handle, CVI_TDL_SUPPORTED_MODEL_YOLOV6, 0.5);
CVI_TDL_SetModelNmsThreshold(tdl_handle, CVI_TDL_SUPPORTED_MODEL_YOLOV6, 0.5);

CVI_TDL_Yolov6(tdl_handle, &fdFrame, &obj_meta);

for (uint32_t i = 0; i < obj_meta.size; i++) {
  printf("detect res: %f %f %f %f %f %d\n", obj_meta.info[i].bbox.x1,
         obj_meta.info[i].bbox.y1, obj_meta.info[i].bbox.x2,
         obj_meta.info[i].bbox.y2, obj_meta.info[i].bbox.score,
         obj_meta.info[i].classes);
}
4.5. Test Results
The yolov6n and yolov6s models provided by the official YOLOv6 repository were converted and evaluated on the COCO2017 dataset.
The evaluation parameters are set to:
- conf_threshold: 0.03
- nms_threshold: 0.65
- resolution: 640 x 640
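The mAP columns in the tables below are the standard COCO metrics (AP at IoU 0.5 and averaged over IoU 0.5:0.95). For reference only, such numbers are typically computed with pycocotools from a detection-results JSON; the file names in this sketch are hypothetical.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val2017.json')  # COCO2017 validation annotations
coco_dt = coco_gt.loadRes('yolov6n_detections.json')  # hypothetical detection results file

coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP@[0.5:0.95] and AP@0.5, matching the mAP columns' definitions
```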
Performance of the yolov6n model ONNX exported with the official export method:

| platform | Inference time (ms) | Bandwidth (MB) | ION (MB) | mAP 0.5 | mAP 0.5-0.95 |
|---|---|---|---|---|---|
| pytorch | N/A | N/A | N/A | 53.1 | 37.5 |
| cv181x | ION allocation failure | ION allocation failure | 11.58 | Quantization failure | Quantization failure |
| cv182x | 39.17 | 47.08 | 11.56 | Quantization failure | Quantization failure |
| cv183x | Quantization failure | Quantization failure | Quantization failure | Quantization failure | Quantization failure |
Performance of the yolov6n model ONNX exported with the TDL SDK export method:

| platform | Inference time (ms) | Bandwidth (MB) | ION (MB) | mAP 0.5 | mAP 0.5-0.95 |
|---|---|---|---|---|---|
| onnx | N/A | N/A | N/A | 51.6373 | 36.4384 |
| cv181x | 49.11 | 31.35 | 8.46 | 49.8226 | 34.284 |
| cv182x | 34.14 | 30.53 | 8.45 | 49.8226 | 34.284 |
| cv183x | 10.89 | 21.22 | 8.49 | 49.8226 | 34.284 |
Performance of the yolov6s model ONNX exported with the official export method:

| platform | Inference time (ms) | Bandwidth (MB) | ION (MB) | mAP 0.5 | mAP 0.5-0.95 |
|---|---|---|---|---|---|
| pytorch | N/A | N/A | N/A | 61.8 | 45 |
| cv181x | ION allocation failure | ION allocation failure | 27.56 | Quantization failure | Quantization failure |
| cv182x | 131.1 | 115.81 | 27.56 | Quantization failure | Quantization failure |
| cv183x | Quantization failure | Quantization failure | Quantization failure | Quantization failure | Quantization failure |
Performance of the yolov6s model ONNX exported with the TDL SDK export method:

| platform | Inference time (ms) | Bandwidth (MB) | ION (MB) | mAP 0.5 | mAP 0.5-0.95 |
|---|---|---|---|---|---|
| onnx | N/A | N/A | N/A | 60.1657 | 43.5878 |
| cv181x | ION allocation failure | ION allocation failure | 25.33 | ION allocation failure | ION allocation failure |
| cv182x | 126.04 | 99.16 | 25.32 | 56.2774 | 40.0781 |
| cv183x | 38.55 | 57.26 | 23.59 | 56.2774 | 40.0781 |