
Example Analysis of the Parameters workers and batch-size in YOLOv5 Training


This article explains in detail the workers and batch-size parameters used when training YOLOv5. The editor finds it very practical and shares it here for reference; I hope you get something out of it after reading.

YOLOv5 training command:

python .\train.py --data my.yaml --workers 8 --batch-size 32 --epochs 100

Training YOLOv5 is very simple. After cloning the repository and installing the dependencies, you only need to customize a yaml file in the data directory. Here I use a custom my.yaml file, which defines the location of the dataset and the number and names of the training classes; a sketch of such a file is shown below.
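For reference, here is a minimal sketch of what such a my.yaml might look like, following the standard YOLOv5 data-config format. The paths and class names are made-up placeholders, not from the original article:

# my.yaml - dataset location and classes (illustrative values)
train: ../datasets/mydata/images/train  # folder with training images
val: ../datasets/mydata/images/val      # folder with validation images
nc: 2                                   # number of classes
names: ['cat', 'dog']                   # class names, one per index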

Understanding the workers and batch-size parameters

In general, the main parameters that need adjusting during training are these two:

workers

Refers to the number of CPU worker processes used to load data. The default is 8. The relevant code is:

parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
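Under the hood this value becomes the num_workers argument of a PyTorch DataLoader. Here is a minimal, self-contained sketch of the idea; this is not YOLOv5's actual dataloading code, and the dummy dataset is a stand-in for its real dataset class:

import torch
from torch.utils.data import DataLoader, Dataset

class DummyImageDataset(Dataset):
    """Stand-in dataset simulating the CPU-side work of loading one image."""
    def __len__(self):
        return 256

    def __getitem__(self, idx):
        # In a real pipeline this would decode and augment an image from disk
        return torch.rand(3, 640, 640), 0

if __name__ == '__main__':  # guard required for multi-process loading on Windows
    # workers -> num_workers: how many CPU processes prefetch batches in parallel.
    # Each worker is a separate process with its own memory, so RAM/page-file
    # usage grows as you raise this value.
    loader = DataLoader(DummyImageDataset(), batch_size=32, num_workers=8, shuffle=True)
    for images, labels in loader:
        pass  # the training step would consume each batch here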

Generally speaking, keeping the default of 8 will often raise an error. The reason is that system memory gets exhausted: besides physical RAM, the dataloader workers also consume the system's virtual memory (page file). During training, watch whether the actual committed memory exceeds the maximum available; if it does, the process either exits abruptly or reports an error.

Therefore, you need to enlarge the system's virtual memory limit (the page file on the disk from which Python runs the program) according to your actual situation.

batch-size

Simply how many images are fed to the GPU at a time. It determines how much video memory is used; the default is 16. The relevant code is:

parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')

Of course, the more video memory you can use during training the better, but if you overflow the video memory, training fails outright. When I use --batch-size 32, the video memory is almost fully used.
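As the help text above notes, passing -1 asks YOLOv5 to estimate a batch size that fits in video memory automatically (autobatch). For example, the earlier command would become:

python .\train.py --data my.yaml --workers 8 --batch-size -1 --epochs 100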

Tuning the two parameters

1. For workers, bigger is not always better. Once the value is large enough, the training speed stays the same, but the virtual memory (disk space) consumed doubles.

[Screenshot: memory footprint when workers=4]

[Screenshot: memory footprint when workers=8]

My graphics card is an RTX 3050. In practice, anything above 4 makes little difference; the GPU is already fully loaded. But if the setting is too small, the GPU is starved of data. For example, with workers=1 the card draws only 72 W and runs at half speed; with workers=4 the card can reach 120 W, which fully saturates its compute. So adjust this parameter according to your actual hardware.
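If you would rather measure this than watch the power draw, a rough way to compare settings is to time how fast the dataloader alone can be drained at different worker counts. A sketch, using the same kind of dummy dataset as above in place of your real one:

import time
import torch
from torch.utils.data import DataLoader, Dataset

class DummyImageDataset(Dataset):
    def __len__(self):
        return 256
    def __getitem__(self, idx):
        return torch.rand(3, 640, 640), 0  # placeholder for real image loading

if __name__ == '__main__':
    for workers in (1, 2, 4, 8):
        loader = DataLoader(DummyImageDataset(), batch_size=32, num_workers=workers)
        start = time.time()
        for _ in loader:
            pass  # drain the loader; no GPU work, so this isolates data throughput
        print(f'workers={workers}: {time.time() - start:.2f} s')

Once raising workers stops shortening the time per pass, extra workers only cost memory.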

2. For batch-size, it is a bit of a dark art. In theory the larger the better, but in practice it turns out to be more efficient at multiples of 8. That is, training at 32 is slightly more efficient than at 34; it is not clear to me what the underlying principle is, but empirically this holds.

This is the end of the article on "Example Analysis of the Parameters workers and batch-size in YOLOv5 Training". I hope the above content is helpful to you; if you think the article is good, please share it so more people can see it.
