How to understand the thread grouping competition mode in multi-core programming

2025-01-15 Update From: SLTechnology News&Howtos


This article introduces the thread-grouping contention mode in multi-core programming. Many people have questions about how this mode works, so the material below lays out the idea and simple, practical ways to apply it. I hope it helps resolve those doubts. Please follow along and study!

In multi-core programming, CPU starvation caused by lock contention is one of the main reasons a multi-core CPU fails to reach its full performance. The impact of lock contention on performance was discussed in the earlier article on lock contention in multi-core programming, so eliminating the CPU starvation it causes has become an urgent problem to solve.

The lock-free programming techniques developed by industry can effectively reduce the performance degradation caused by lock contention. Lock-free programming mainly replaces locks with atomic operations, leaving only contention on the atomic operations themselves; because an atomic operation is a single instruction and executes very quickly, this can be treated as approximately contention-free, unless atomic operations are performed very frequently. Lock-free programming is, however, very difficult, and realistically it is beyond what ordinary programmers can do themselves. At present only a handful of data structures have lock-free implementations. The commercial lock-free library NOBLE, for example, provides only a few lock-free structures, such as a queue, stack, linked list, dictionary, and reference-counted garbage-collecting memory management, and can therefore solve only part of the lock-contention problem. The library is also expensive; here are prices copied from NOBLE's website:

$1395 USD, NOBLE Professional Edition, Evaluation License 1 Months, Windows

$3295 USD, NOBLE Professional Edition, Evaluation License 3 Months, Windows

At these prices, it is unlikely that many companies would pay so much for a library whose functionality is limited and which is less convenient to use than conventional lock-based libraries.

Since there is no free lunch in lock-free programming, can lock-based programming avoid the CPU starvation caused by lock contention? The answer is yes: this is the thread-grouping contention mode of the title. It uses locks to protect shared data, yet avoids CPU starvation during lock contention. In fact, this mode already has a very successful industrial practice: the queue pool. Of course, the mode is not limited to queue pools; it can be applied to many other kinds of shared-data protection that meet certain conditions.

First, let's look at the basic idea of the thread-grouping contention mode. Thread-grouping contention divides the threads into several groups: there is lock contention among the threads within each group, but no lock contention between threads in different groups. The following figure illustrates contention between two groups, using add and delete operations as the example:

The figure shows contention between two groups: four threads are divided into two groups. Add thread 1 and delete thread 1 contend for one lock, and add thread 2 and delete thread 2 contend for another, but there is no lock contention between the threads of group 1 (add thread 1 or delete thread 1) and the threads of group 2 (add thread 2 or delete thread 2).

In this grouped-lock contention mode, at least one thread in each group is executing at any moment, so with N groups of threads at least N threads are executing. If N is greater than or equal to the number of CPU cores, then some thread is always running on every core, which fully guarantees that the CPU will not starve.

Not all shared data can be accessed in thread-grouping fashion: the shared data must be divisible into several independent pieces of sub-data, and each operation must need to touch only one piece, never several pieces in a single operation.

The thread-grouping contention mode has two major advantages over lock-free programming:

First, it uses ordinary lock-based programming, which, unlike lock-free programming, is easy for ordinary programmers to master.

Second, its concurrency is better than that of lock-free programming. Lock-free programming still suffers from contention on atomic operations, and that contention intensifies as the number of CPU cores grows; in the worst case, when atomic operations sit inside a large loop, the contention can drag performance down to that of a single-core CPU. In the thread-grouping mode, the CPU cores run fully in parallel with no contention between cores, so increasing the core count has no adverse effect.

The above describes the basic thread-grouping contention mode. In practice, many situations do not match it exactly, so several variants of the mode exist to fit more real cases. The following is one of the most common:

As shown in the figure above, this variant has only one add thread and two delete threads. Add thread A uses lock 1 when it operates on sub-memory area 1 and lock 2 when it operates on sub-memory area 2. Add thread A can therefore contend with delete thread 1 and with delete thread 2, but there is no lock contention between delete thread 1 and delete thread 2. At run time, at least two of the three threads are running at any moment; more generally, with N sub-memory areas and N delete threads, at least N threads are running at once. So this lock-contention mode also guarantees that no CPU starvation occurs when the number of CPU cores is less than or equal to the number of thread groups. The queue pool is a successful practice of this variant.

The thread-grouping contention mode is the most effective way to eliminate the multi-core performance degradation caused by lock contention; its performance approaches that of multithreaded programming on a single-core CPU. The mode is also an effective method for designing locally distributed data structures.

This concludes the study of the thread-grouping contention mode in multi-core programming. I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it!
