Today we will walk through how to train YOLOv5 on a custom dataset. If you are not yet familiar with the process, we have summarized the steps below, and we hope you get something useful out of this article.
The YOLO family of object detection models continues to grow stronger with the introduction of YOLOv5. In this post, we will show you how to train YOLOv5 to recognize custom objects for your own use case.
Many thanks to Ultralytics for putting this repository together. We believe that, combined with clean data management tools, this technology is within easy reach of any developer who wants to add computer vision to a project.
We use the public blood cell detection dataset, which you can export yourself, or you can follow this tutorial with your own custom data.
In order to train the detector, we take the following steps:
Install YOLOv5 dependencies
Download custom YOLOv5 object detection data
Define YOLOv5 model configuration and architecture
Train a custom YOLOv5 detector
Evaluate YOLOv5 performance
Visualize YOLOv5 training data
Run YOLOv5 inference on the test image
Export saved YOLOv5 weights for future inference
YOLOv5: is there anything new?
Just two months ago, we were very excited about Google Brain's introduction of EfficientDet and wrote some blog posts breaking EfficientDet down. We thought this model might overtake the YOLO family's prominence in real-time object detection; we turned out to be wrong.
Within three weeks, YOLOv4 was released under the Darknet framework, and we wrote several more articles breaking down YOLOv4.
A few hours before writing this article, YOLOv5 was released, and we found it remarkably clear and easy to use.
YOLOv5 is written in the Ultralytics PyTorch framework, which is very intuitive to use and very fast at inference. In fact, we and many others frequently convert YOLOv3 and YOLOv4 Darknet weights to Ultralytics PyTorch weights so that inference runs faster in a lighter library.
Is YOLOv5 better than YOLOv4? We will share our findings soon; for now, you can form your own preliminary guess about how YOLOv5 and YOLOv4 compare.
Performance comparison between YOLOv5 and EfficientDet
To be clear, YOLOv4 is not evaluated in the YOLOv5 repository. That said, YOLOv5 is easier to use, and it performed very well on the custom data we first ran it on.
We recommend following along in the YOLOv5 Colab notebook as you read:
https://colab.research.google.com/drive/1gDZ2xcTOgR39tGGs-EZ6i3RTs16wmzZQ
Install the YOLOv5 environment
To get started with YOLOv5, we first clone the YOLOv5 repository and install its dependencies. This sets up our programming environment, ready to run object detection training and inference commands.
!git clone https://github.com/ultralytics/yolov5  # clone repo
!pip install -U -r yolov5/requirements.txt  # install dependencies
%cd /content/yolov5
Then we can take a look at the training environment that Google Colab provides us for free.
import torch
from IPython.display import Image  # for displaying images
from utils.google_utils import gdrive_download  # for downloading models/datasets
print('torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
Most likely you will receive a Tesla P100 GPU from Google Colab. The following is what I received:
torch 1.5.0+cu101 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', major=6, minor=0, total_memory=16280MB, multi_processor_count=56)
The GPU lets us speed up training considerably. Colab is also convenient because it comes with torch and cuda pre-installed. If you try to follow this tutorial locally, you may need additional steps to set up YOLOv5.
Download custom YOLOv5 object detection data
In this tutorial, we download custom object detection data in YOLOv5 format from Roboflow. We train YOLOv5 to detect cells in the bloodstream using the public blood cell detection dataset, but you can also upload your own dataset instead.
Roboflow: https://roboflow.ai/
Common blood cell data set: https://public.roboflow.ai/object-detection/bccd
A quick note on labeling tools
If you have unlabeled images, you will need to label them first. For free, open-source labeling tools, we recommend the getting-started guides for the LabelImg or CVAT annotation tools. Try labeling about 50 images before continuing with this tutorial; you will need to add more labels later to improve your model's performance.
Once you have labeled your data, move it into Roboflow: create a free account, then drag in your dataset in any format (VOC XML, COCO JSON, TensorFlow Object Detection CSV, etc.).
After uploading, you can choose preprocessing and augmentation steps:
Settings selected for the BCCD sample dataset
Then click Generate and Download, and you will be able to choose the YOLOv5 PyTorch format.
Select "YOLO v5 PyTorch"
Be sure to select "Show Code Snippet" when prompted. This will output a curl download script so that you can easily move the data into Colab in the correct format.
!curl -L "https://public.roboflow.ai/ds/YOUR-LINK-HERE" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
Downloading in Colab.
Download custom object datasets in YOLOv5 format
The export creates a YOLOv5 .yaml file named data.yaml that specifies the location of the YOLOv5 images folder, the YOLOv5 labels folder, and information about our custom classes.
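As a quick sanity check (not part of the original export), you can load data.yaml in the notebook and print its fields; the exact paths depend on where the Roboflow export was unzipped:

import yaml  # PyYAML comes pre-installed in Colab

with open('data.yaml') as f:   # adjust the path if the export landed elsewhere
    data = yaml.safe_load(f)

print(data['train'])  # folder of training images
print(data['val'])    # folder of validation images
print(data['nc'])     # number of classes (3 for the BCCD dataset)
print(data['names'])  # class names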
Define YOLOv5 model configuration and architecture
Next, we write a model configuration file for our custom object detector. For this tutorial, we chose the smallest, fastest base model of YOLOv5. You can choose from the other YOLOv5 models, including:
YOLOv5s
YOLOv5m
YOLOv5l
YOLOv5x
You can also edit the network structure in this step, but you generally do not need to do so. The following is the YOLOv5 model configuration file, which we named custom_yolov5s.yaml:
nc: 3  # number of classes
depth_multiple: 0.33
width_multiple: 0.50

anchors:
  - [10,13, 16,30, 33,23]
  - [30,61, 62,45, 59,119]
  - [116,90, 156,198, 373,326]

backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],
   [-1, 1, Conv, [128, 3, 2]],
   [-1, 3, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 6, BottleneckCSP, [1024]],
  ]

head:
  [[-1, 3, BottleneckCSP, [1024, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # large objects
   [-2, 1, nn.Upsample, [None, 2, "nearest"]],
   [[-1, 6], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 3, BottleneckCSP, [512, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # medium objects
   [-2, 1, nn.Upsample, [None, 2, "nearest"]],
   [[-1, 4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 3, BottleneckCSP, [256, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # small objects
   [[], 1, Detect, [nc, anchors]],
  ]

Train a custom YOLOv5 detector
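If you prefer to generate this file from the notebook rather than editing it by hand, one approach (a sketch, not necessarily what the original notebook does) is to read the class count from data.yaml and patch it into the stock yolov5s config shipped with the repository:

import yaml

# assumption: data.yaml sits in the current working directory
with open('data.yaml') as f:
    num_classes = yaml.safe_load(f)['nc']   # 3 for the BCCD dataset

with open('models/yolov5s.yaml') as f:      # stock config shipped with the repo
    config = yaml.safe_load(f)

config['nc'] = num_classes                  # only the class count changes for our data

with open('models/custom_yolov5s.yaml', 'w') as f:
    yaml.dump(config, f, sort_keys=False)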
Our data.yaml and custom_yolov5s.yaml files are ready, and we are ready to train!
To start the training, we run the training command using the following options:
img: define input image size
batch: determine batch size
epochs: define the number of training epochs. (Note: 3000+ is common here!)
data: set the path to our yaml file
cfg: specify our model configuration
weights: specify a custom path to weights. (Note: you can download weights from the Ultralytics Google Drive folder)
name: result names
nosave: only save the final checkpoint
cache: cache images for faster training
Run the training command:
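For reference, a command along these lines works (a sketch; the data.yaml path, epoch count, and run name are placeholders you should adapt to your own export):

!python train.py --img 416 --batch 16 --epochs 100 --data ./data.yaml --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache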
Train custom YOLOv5 detectors. It trains very fast!
Evaluate the performance of custom YOLOv5 detectors
Now that we have completed training, we can evaluate how well the training procedure performed by looking at the validation metrics. The training script writes out TensorBoard logs, which we can visualize:
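For example, to bring TensorBoard up inside the notebook (a minimal sketch, assuming the logs are written under the default runs/ directory):

%load_ext tensorboard
%tensorboard --logdir runs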
Visualize tensorboard results on our custom dataset
If you cannot visualize TensorBoard for some reason, the results can also be plotted with the repository's plot_results utility and saved as results.png.
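A minimal sketch of that fallback, assuming the early-2020 module layout in which plot_results lives in utils.utils (the import path has moved in later releases):

from utils.utils import plot_results  # assumption: utils.utils as in early YOLOv5 releases
plot_results()  # reads the run's results.txt and saves results.png in the working directory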
I stopped training a little early. You want to take the trained model weights from the point where the validation mAP reaches its peak.
Visualization of YOLOv5 training data
During training, the YOLOv5 training pipeline creates batches of training data with augmentations. We can visualize both the ground-truth training data and the augmented training data, as in the sketch below.
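A sketch for viewing those batches from the notebook; the filenames are an assumption (train.py typically writes annotated mosaics such as train_batch0.jpg to the working directory, but the names vary by version):

import glob
from IPython.display import Image, display

# display whatever annotated training mosaics train.py has written
for name in sorted(glob.glob('train_batch*.jpg')):
    display(Image(filename=name, width=900))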
Our ground-truth training data
Our training data after automatic YOLOv5 augmentation
Run YOLOv5 inference on the test image
Now we take our trained model and run inference on test images. After training completes, the model weights are saved to weights/.
For inference, we invoke those weights together with --conf, which specifies the model confidence threshold (the higher the required confidence, the fewer the predictions), and an inference source. The source can be a directory of images, individual images, video files, or a device's webcam port. For the source, I moved test/*jpg to test_infer/.
!python detect.py --weights weights/last_yolov5s_custom.pt --img 416 --conf 0.4 --source ./test_infer
Inference runs very quickly. On our Tesla P100, YOLOv5s reaches 142 frames per second!
YOLOv5s inference at 142 FPS (0.007 s per image)
Finally, we visualize our detector's inferences on the test images, as sketched below.
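A sketch for displaying the annotated predictions in the notebook, assuming detect.py saved them to inference/output (the default in early YOLOv5 releases; newer versions write to runs/detect/):

import glob
from IPython.display import Image, display

# show every annotated test image produced by detect.py
for name in glob.glob('inference/output/*.jpg'):
    display(Image(filename=name, width=900))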
YOLOv5 inference on test images
Export saved YOLOv5 weights for future inference
Now that we have verified our custom YOLOv5 object detector, we may want to take the weights out of Colab for use in a live computer vision task. To do so, we import a Google Drive module and copy them out.
from google.colab import drive
drive.mount('/content/gdrive')
%cp /content/yolov5/weights/last_yolov5s_custom.pt /content/gdrive/My\ Drive

After reading the above, you should have a much better idea of how to train YOLOv5 on your own custom dataset.