
The reason OpenVPN is single-threaded

2025-04-01 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/03 Report --

This article briefly discusses a question few people have considered: why OpenVPN is designed as a single-threaded, single-process program. I had been waiting for the launch of the new version of OpenVPN, and the first thing I did was look at its ChangeLog, but I found nothing about multithreading. I had never understood why, in the multi-core era, OpenVPN remained single-process and single-threaded. I finally found the answer after reading through the OpenVPN mailing list.

The greatest advantage of multithreading on a multi-core system is that it can make full use of the processor cores. A good algorithm, however, is concerned above all with how to divide the problem into sub-problems that can be processed in parallel, and then assign those sub-problems to different cores. In the typical C/S (client/server) model, threads or processes are generally dispatched per client (a minimal illustration follows below): Apache does this, IIS does much the same, and the classic UNIX network programming texts recommend it too, so it is easy to take the Apache approach for granted. Event-driven mechanisms have largely replaced the naive per-client-per-thread scheme, but the way the work is partitioned underneath has not really changed; what changed most is the user-mode API.
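To make the per-client dispatch model concrete, here is a minimal sketch of the per-client-per-thread pattern. It is only an illustration, not code from Apache or OpenVPN: the listener accepts connections and hands each one to its own thread, so the kernel scheduler can spread independent clients across cores.

# Minimal per-client-per-thread echo server: each accepted connection
# gets its own thread, so independent clients can run on different cores.
import socket
import threading

def handle_client(conn, addr):
    # One thread per client: this worker only ever touches its own socket.
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break          # client closed the connection
            conn.sendall(data) # echo the payload back

def serve(host="0.0.0.0", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            # Dispatch per client, exactly as the classic C/S model suggests.
            threading.Thread(target=handle_client, args=(conn, addr),
                             daemon=True).start()

if __name__ == "__main__":
    serve()

This works for a web server because each client carries its own independent unit of work; the question below is why the same split does not map cleanly onto a VPN.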

Now let's see why OpenVPN uses only a single processor core. The reason is simple: the problem cannot be partitioned well at the IP layer. If the work is divided per user, it is quite likely that a user transfers no data after connecting, leaving that thread idle. If it is divided by the actual access target, the combinations of users and targets explode into a Cartesian product. The best approach is therefore to let multiple OpenVPN instances complete the partitioning themselves, and then improve efficiency with techniques such as load balancing and virtual NIC bonding (a rough sketch of this multi-instance idea follows below). Sadly, very few people actually do this; most are quite satisfied with the performance of single-threaded OpenVPN. When I googled for OpenVPN speed-up schemes, what almost brought me to tears was that nearly every article I found had been written by myself. I am just an ordinary programmer with my own work to do, not a full-time researcher at an institute, so I hope for some interaction; I can't only give and never receive.
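As a rough illustration of the "multiple instances plus load balancing" idea, the sketch below is my own assumption of how it might be wired up, not an official OpenVPN feature: it starts several independent openvpn server processes on consecutive ports and pins each one to its own CPU core with taskset. An external UDP load balancer or round-robin DNS would then spread clients across the instances. The config path, port range, and instance count are placeholders, and a real deployment would also give each instance its own server subnet and supervise the processes.

# Hypothetical launcher: run N independent OpenVPN server instances,
# one per CPU core, each listening on its own UDP port.
import subprocess

BASE_PORT = 1194                      # placeholder: first instance on the default port
CONFIG = "/etc/openvpn/server.conf"   # placeholder path to a shared template config
NUM_INSTANCES = 4                     # e.g. one instance per core

procs = []
for i in range(NUM_INSTANCES):
    port = BASE_PORT + i
    cmd = [
        "taskset", "-c", str(i),      # pin this instance to core i
        "openvpn",
        "--config", CONFIG,           # shared base configuration
        "--port", str(port),          # each instance gets its own port
        "--dev", f"tun{i}",           # and its own tun device
    ]
    procs.append(subprocess.Popen(cmd))

# Wait for all instances; put a UDP load balancer or round-robin DNS
# in front of the ports so clients are spread across them.
for p in procs:
    p.wait()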
