
How to deploy YOLOV3-Tiny Model with TensorRT on VS2015


Today I will talk to you about how to deploy the YOLOV3-Tiny model with TensorRT on VS2015. Many people may not know much about this, so the editor has summarized the process below. I hope you can get something out of this article.

1. Preface

Hello, everyone. Recently I tried to deploy a detection model with TensorRT on VS2015. After two days of detours, I felt that the process would not be smooth for a complete novice, so I am writing this deployment article in the hope of saving some detours for other students who use TensorRT to deploy a YOLOV3-Tiny detection model.

2. Which route to take?

Here I used the AlexeyAB version of DarkNet to train a YOLOV3-Tiny detection model (consisting of *.weights and *.cfg files) and deployed it with TensorRT on an NVIDIA GTX 1060 graphics card. The model conversion path I chose is DarkNet -> ONNX -> TRT. We know that TensorRT can either load an ONNX model directly or load a TRT engine file obtained by converting the ONNX model, and converting an ONNX model to a TRT engine file is very simple and can be done directly in code, so the first thing to focus on is the conversion from the DarkNet model to the ONNX model.

3. DarkNet2ONNX

It is now clear that the first task is to convert the DarkNet model into an ONNX model. We can do this with the help of a project on Github: https://github.com/zombie0117/yolov3-tiny-onnx-TensorRT. The specific steps are as follows:

1. Clone the project.
2. Use Python 2.7.
3. Execute pip install onnx==1.4.1.
4. Manually add a blank line at the end of the YOLOV3-Tiny cfg file.
5. In yolov3_to_onnx.py, modify the paths of the cfg and weights files and the path where the ONNX model will be saved.
6. Execute the yolov3_to_onnx.py script to obtain the yolov3-tiny.onnx model.

Let's take a look at a visualization of the yolov3-tiny.onnx model (using Netron). Here we only look at the key part:

Yolov3-tiny.onnx visualization

You can see that the YOLO layers no longer exist at the end of the ONNX model (ONNX does not support the YOLO layer, so it is skipped during conversion), and the two output layers are plain convolution layers producing feature maps. This means that BBox decoding and the NMS post-processing have to be done manually in our own code, as sketched below.
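Because the engine only returns these raw feature maps, each output must be decoded with the standard YOLOv3 transforms (sigmoid on the x/y offsets, objectness and class scores; exponential on w/h scaled by the anchors) before NMS. Below is a minimal sketch of that decoding; the function name, the Detection struct, and the CHW layout are my own illustrative assumptions rather than code from the original project, and the surviving boxes still need a standard NMS pass afterwards.

// Minimal decoding sketch (assumed layout): one YOLO output tensor of shape
// [3 * (5 + numClasses), gridH, gridW] in CHW order; anchors are the (w, h)
// pairs in pixels taken from the YOLOV3-Tiny cfg for this scale.
#include <cmath>
#include <vector>

struct Detection { float x, y, w, h, score; int cls; };

static inline float sigmoid(float x) { return 1.f / (1.f + std::exp(-x)); }

std::vector<Detection> decodeYoloOutput(const float* fm, int gridW, int gridH,
                                        const float anchors[3][2], int numClasses,
                                        int netW, int netH, float confThresh)
{
    std::vector<Detection> dets;
    const int stride = gridW * gridH;
    const int chansPerAnchor = 5 + numClasses;
    for (int a = 0; a < 3; ++a)
        for (int gy = 0; gy < gridH; ++gy)
            for (int gx = 0; gx < gridW; ++gx)
            {
                // pointer to channel 0 of this anchor at this grid cell
                const float* p = fm + a * chansPerAnchor * stride + gy * gridW + gx;
                float obj = sigmoid(p[4 * stride]);
                if (obj < confThresh) continue;
                // pick the best class (YOLOv3 uses per-class sigmoid)
                int cls = 0; float clsProb = 0.f;
                for (int c = 0; c < numClasses; ++c)
                {
                    float pc = sigmoid(p[(5 + c) * stride]);
                    if (pc > clsProb) { clsProb = pc; cls = c; }
                }
                Detection d;
                d.x = (gx + sigmoid(p[0])) / gridW * netW;       // box center x
                d.y = (gy + sigmoid(p[stride])) / gridH * netH;  // box center y
                d.w = std::exp(p[2 * stride]) * anchors[a][0];   // box width
                d.h = std::exp(p[3 * stride]) * anchors[a][1];   // box height
                d.score = obj * clsProb;
                d.cls = cls;
                if (d.score >= confThresh) dets.push_back(d);
            }
    return dets;
}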

4. ONNX2TRT

After obtaining the ONNX model of YOLOV3-Tiny, we can convert it into a TensorRT engine file. The code for this conversion is as follows:

// Convert an ONNX model to a serialized TensorRT engine.
// gLogger, gArgs and samplesCommon::setAllTensorScales come from the common
// helper code shipped with the TensorRT samples.
bool onnxToTRTModel(const std::string& modelFile,   // name of the ONNX model file
                    const std::string& filename,    // name of the saved TensorRT engine
                    IHostMemory*& trtModelStream)   // output buffer for the TensorRT model
{
    // create the builder
    IBuilder* builder = createInferBuilder(gLogger.getTRTLogger());
    assert(builder != nullptr);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    // parse the ONNX model
    auto parser = nvonnxparser::createParser(*network, gLogger.getTRTLogger());

    // optional - uncomment the lines below to view detailed information for each layer of the network
    // config->setPrintLayerInfo(true);
    // parser->reportParsingInfo();

    // check whether the ONNX model was parsed successfully
    if (!parser->parseFromFile(modelFile.c_str(), static_cast<int>(gLogger.getReportableSeverity())))
    {
        gLogError << "Failure while parsing ONNX file" << std::endl;
        return false;
    }

    // build the engine
    builder->setMaxWorkspaceSize(1 << 30);
    builder->setFp16Mode(true);
    builder->setInt8Mode(gArgs.runInInt8);
    if (gArgs.runInInt8)
    {
        samplesCommon::setAllTensorScales(network, 127.0f, 127.0f);
    }

    std::cout << "start building engine" << std::endl;
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    std::cout << "build engine done" << std::endl;
    assert(engine);

    // the parser is no longer needed
    parser->destroy();

    // serialize the engine and write it to disk
    trtModelStream = engine->serialize();
    std::ofstream file;
    file.open(filename, std::ios::binary | std::ios::out);
    std::cout << "writing engine file..." << std::endl;
    file.write((const char*)trtModelStream->data(), trtModelStream->size());
    std::cout << "save engine file done" << std::endl;
    file.close();

    engine->destroy();
    builder->destroy();
    return true;
}
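For reference, a minimal call site might look like this (the file paths are illustrative):

IHostMemory* trtModelStream{nullptr};
if (!onnxToTRTModel("yolov3-tiny.onnx", "yolov3-tiny.trt", trtModelStream))
{
    gLogError << "failed to build the engine" << std::endl;
}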

After executing this function, yolov3-tiny.trt will be generated in the specified directory. As the figure below shows, the engine file is 48.6 MB, while the original weights file is only 34.3 MB.

Yolov3-tiny.trt

5. Forward inference and post-processing

There is no need to explain this part in detail, so I will just give the source code. For reasons of space, I uploaded it to Github: https://github.com/BBuf/cv_tools/blob/master/trt_yolov3_tiny.cpp. Note that I am using TensorRT 6.0. Modify the ONNX model path and the image path and you will get the correct inference results.
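As a quick orientation before reading the full source, here is a stripped-down sketch of the inference path with the TensorRT 6-era API. The binding order (0 = input, 1 and 2 = the two feature maps), the 416x416 input size, and the function names are assumptions for illustration, not the repository's actual code; the real binding indices should be checked with engine->getBindingIndex().

// Sketch: deserialize the saved yolov3-tiny.trt and run one forward pass.
#include <cassert>
#include <fstream>
#include <string>
#include <vector>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

nvinfer1::ICudaEngine* loadEngine(const std::string& filename, nvinfer1::ILogger& logger)
{
    std::ifstream file(filename, std::ios::binary);
    if (!file.good()) return nullptr;
    file.seekg(0, std::ifstream::end);
    const size_t size = file.tellg();
    file.seekg(0, std::ifstream::beg);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine = runtime->deserializeCudaEngine(blob.data(), size, nullptr);
    runtime->destroy();
    return engine;
}

// input: preprocessed CHW float image (3 x 416 x 416); out13/out26 receive the
// two conv feature maps (counts = number of floats in each output binding).
void inferOnce(nvinfer1::ICudaEngine* engine, const float* input,
               float* out13, size_t out13Count, float* out26, size_t out26Count)
{
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();
    assert(engine->getNbBindings() == 3); // assumed: 0 = input, 1/2 = outputs

    void* buffers[3];
    const size_t inBytes = 3 * 416 * 416 * sizeof(float);
    cudaMalloc(&buffers[0], inBytes);
    cudaMalloc(&buffers[1], out13Count * sizeof(float));
    cudaMalloc(&buffers[2], out26Count * sizeof(float));

    cudaMemcpy(buffers[0], input, inBytes, cudaMemcpyHostToDevice);
    context->execute(1, buffers); // synchronous forward pass, batch size 1
    cudaMemcpy(out13, buffers[1], out13Count * sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(out26, buffers[2], out26Count * sizeof(float), cudaMemcpyDeviceToHost);

    for (void* b : buffers) cudaFree(b);
    context->destroy();
}

The two host-side feature maps then go through the decoding step sketched in section 3, followed by a standard NMS pass.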

After reading the above, do you have a better understanding of how to deploy the YOLOV3-Tiny model with TensorRT on VS2015? If you want to learn more, please follow the Internet Technology channel. Thank you for your support.
