How to Implement Robomaster Auto-Aiming with OpenVINO


Today I will talk with you about how to implement Robomaster auto-aiming with OpenVINO. Many people may not know much about it, so to help you understand better, the editor has summarized the following content for you. I hope you can take something away from this article.

In the Robomaster competition, contestants usually identify the armor plates through color separation, contour extraction, and contour matching, but they often spend a lot of time at the venue tuning parameters. So we wondered: could we use deep learning for auto-aiming to improve its robustness? Deep learning algorithms, however, usually do not perform well in real time: only graphics cards like a 1080 Ti can achieve real-time speed, and nobody is going to mount a gas stove on a robot. Many people think of TensorRT, or of model pruning/compression and low-bit inference, to speed up deep learning on the GPU, but few expect that a pure CPU can also run a neural network in real time. With OpenVINO, released by Intel, we can run object detection algorithms in real time on an Intel CPU or a Neural Compute Stick. Here we introduce the complete implementation for the CPU + compute stick setup. How does it work? It is divided into three steps:

1. Train your own model (or use an official demo model).
2. Convert the model to the Intermediate Representation (IR).
3. Deploy and test the demo.

According to the official website, OpenVINO supports TensorFlow best, so here we take Google's model library as an example to walk through the pipeline above.

1 Train your own model

This section takes the Robomaster dataset as an example and uses the TensorFlow Object Detection API to train a model.

1.1 The model library used

Link: https://github.com/tensorflow/models/tree/master/research/object_detection

TensorFlow Object Detection API is Google's open-source model library, which contains complete training and evaluation code.

The library includes mainstream detection and segmentation networks such as SSD, Faster R-CNN, and Mask R-CNN; backbone networks include MobileNet v1/v2/v3 (you can see Google's bias toward its own networks), as well as Inception v2 and ResNet 50/101.

(Figure: the SSD model family; mAP stands for detection accuracy, the larger the better.)

1.2 Dataset

In early 2020, DJI released a dataset shot from an overhead perspective, similar to the view people see when watching the live broadcast. The resolution is 1920×1080. Officially it is presumably intended for object detection at the radar station; for head-on scenes such as auto-aiming there is a certain domain gap, and the large resolution increases the computational burden. So we modified the official dataset as follows:

To make evaluation easier, we converted the original VOC dataset format to COCO format and performed a crop operation on the original images: for each object in a picture, first shift the object's center point randomly by up to 30 pixels, then crop a 400×300 image centered on that point (a sketch follows below).

In the resulting crops, the armor plates are very small, but still visible.
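A minimal sketch of this cropping step (the function name, jittering both axes, and the clamping at image borders are our own assumptions, not the authors' actual script):

import random

def crop_around_object(image, cx, cy, crop_w=400, crop_h=300, jitter=30):
    # Shift the labeled object center by up to `jitter` px, then cut a
    # crop_w x crop_h patch around the shifted point.
    cx += random.randint(-jitter, jitter)
    cy += random.randint(-jitter, jitter)
    h, w = image.shape[:2]
    # Clamp so the window stays inside the original 1920x1080 frame.
    x0 = min(max(cx - crop_w // 2, 0), w - crop_w)
    y0 = min(max(cy - crop_h // 2, 0), h - crop_h)
    return image[y0:y0 + crop_h, x0:x0 + crop_w]

# Usage: patch = crop_around_object(cv2.imread("frame.jpg"), cx, cy)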

Download link: https://pan.baidu.com/s/105vjTcDs6XZHtnXAgCx86g (extraction code: v8yg)

1.3 Training and evaluation

Prerequisite: a graphics card, preferably a 1080 Ti or better. Training takes about two hours on a single V100.

pip install tensorflow-gpu==1.14   # TF version 1.14 works best

A Linux system is required.

1.3.1 Install the TensorFlow Object Detection API

Please refer to the official installation instructions: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md

1.3.2 Modify the configuration files

Convert the COCO-format dataset to tfrecord format:

python object_detection/dataset_tools/create_coco_tf_record.py --logtostderr \
    --train_image_dir="data/roco_train" \
    --val_image_dir="data/roco_val" \
    --test_image_dir="data/roco_val" \
    --train_annotations_file="data/train.json" \
    --val_annotations_file="data/val.json" \
    --testdev_annotations_file="data/val.json" \
    --output_dir="models/research/save_dir"

Change the directories according to your actual locations; the test split can be ignored.

Model config

All model configuration files are in the models/research/object_detection/samples/configs directory, taking ssd_mobilenet_v2_coco.config as an example.

The fields to modify:

① num_classes: 2
② image_resizer: height: 300, width: 400
③ fine_tune_checkpoint
④ the dataset locations at the end of the file

Data augmentation: horizontal flip, plus random jitter of brightness, contrast, and saturation:

data_augmentation_options {
  random_horizontal_flip {
  }
}
data_augmentation_options {
  random_adjust_brightness {
    max_delta: 0.2
  }
}
data_augmentation_options {
  random_adjust_contrast {
    min_delta: 0.7
    max_delta: 1.1
  }
}
data_augmentation_options {
  random_adjust_saturation {
    min_delta: 0.9
    max_delta: 1.1
  }
}

Data config

The dataset categories are recorded in models/research/object_detection/data/*.pbtxt. Here we have two categories, so replace the contents of the file referenced by label_map_path with the following fields (make sure the file names correspond):

item {
  name: "/m/01g317"
  id: 1
  display_name: "armor_blue"
}
item {
  name: "/m/0199g"
  id: 2
  display_name: "armor_red"
}

Training code

export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
CUDA_VISIBLE_DEVICES=0 python object_detection/model_main.py \
    --pipeline_config_path=object_detection/samples/configs/ssd_mobilenet_v2_coco.config \
    --model_dir='output_model' \
    --num_train_steps=500000 \
    --sample_1_of_n_eval_examples=10 \
    --alsologtostderr

Training converges after about two hours on a V100, perhaps three hours on a 1080 Ti. During training, evaluation runs at the same time.

Here we focus on mAP (IoU 0.5:0.95) and AP (IoU 0.5). We can see that the mAP is 0.537 and the AP is 0.974, which basically meets our needs.

Average Precision (AP) @ [IoU=0.50:0.95 | area= all | maxDets=100] = 0.537

Average Precision (AP) @ [IoU=0.50 | area= all | maxDets=100] = 0.974

Average Precision (AP) @ [IoU=0.75 | area= all | maxDets=100] = 0.531

Average Precision (AP) @ [IoU=0.50:0.95 | area= small | maxDets=100] = 0.529

Average Precision (AP) @ [IoU=0.50:0.95 | area=medium | maxDets=100] = 0.613

Average Precision (AP) @ [IoU=0.50:0.95 | area= large | maxDets=100] = -1.000

Average Recall (AR) @ [IoU=0.50:0.95 | area= all | maxDets= 1] = 0.220

Average Recall (AR) @ [IoU=0.50:0.95 | area= all | maxDets= 10] = 0.618

Average Recall (AR) @ [IoU=0.50:0.95 | area= all | maxDets=100] = 0.619

Average Recall (AR) @ [IoU=0.50:0.95 | area= small | maxDets=100] = 0.607

Average Recall (AR) @ [IoU=0.50:0.95 | area=medium | maxDets=100] = 0.684

Average Recall (AR) @ [IoU=0.50:0.95 | area= large | maxDets=100] = -1.000

Of course, we also released the model file.

Download link (Baidu Cloud): https://pan.baidu.com/s/1-m1ovofM_X9rh5rlQEicFg (extraction code: 4nra)

2 OpenVINO model conversion

2.1 Install OpenVINO

For installation under Linux, please refer to the official documentation (very simple): https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html

There is also a video walkthrough on Bilibili: https://www.bilibili.com/video/BV1fC4y1s7dt

2.2 Model conversion

Still in the TensorFlow models folder, first export the inference graph:

python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=object_detection/samples/configs/ssdlite_mobilenet_v2_coco.config \
    --trained_checkpoint_prefix=models/research/output_model/model.ckpt-18723 \
    --output_directory=models/research/exported_model

Then convert the inference graph to the format OpenVINO accepts, the Intermediate Representation (IR). Note that SSD MobileNet v2 corresponds to ssd_support_api_v1.15.json.

python3 mo_tf.py \
    --input_model=exported_model/frozen_inference_graph.pb \
    --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.15.json \
    --tensorflow_object_detection_api_pipeline_config exported_model/pipeline.config \
    --reverse_input_channels

3 Test the demo

Test the model with Python:

python3 object_detection_demo_ssd_async.py -m /home/lilelife/onnx/ssdv2/frozen_inference_graph.xml -i *.avi

Test the model with the C++ demo (remember to compile the demo source code first):

./object_detection_demo_ssd_async -i *.mp4 -m ssdv2/frozen_inference_graph.xml

The result is the GIF shown at the beginning of the article.
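To integrate the model into your own aiming code rather than the stock demos, here is a minimal sketch against the OpenVINO 2020-era Python API (IECore). The file names, the test image, and the 0.5 confidence threshold are illustrative assumptions; the [1, 1, N, 7] output layout is the standard one for SSD models converted by the Model Optimizer.

import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_inference_graph.xml",
                      weights="frozen_inference_graph.bin")
# Use device_name="MYRIAD" to run on the Neural Compute Stick instead.
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
n, c, h, w = net.input_info[input_blob].input_data.shape  # e.g. 1x3x300x400

frame = cv2.imread("test.jpg")  # BGR is fine: we converted with --reverse_input_channels
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
detections = exec_net.infer({input_blob: blob})[output_blob]  # shape [1, 1, N, 7]

fh, fw = frame.shape[:2]
# Each row: [image_id, label, confidence, x_min, y_min, x_max, y_max] (normalized).
for _, label, conf, x1, y1, x2, y2 in detections[0][0]:
    if conf > 0.5:  # label 1 = armor_blue, 2 = armor_red in our label map
        cv2.rectangle(frame, (int(x1 * fw), int(y1 * fh)),
                      (int(x2 * fw), int(y2 * fh)), (0, 255, 0), 2)
cv2.imwrite("result.jpg", frame)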

After reading the above, do you have a better understanding of how to implement Robomaster auto-aiming with OpenVINO? If you want to learn more, please follow the industry information channel. Thank you for your support.
