

This article uses queuing theory to explain how congestion arises, and how it is relieved, on both routers and highways.

2025-02-21 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

About this article

Many of us have been stuck in heavy holiday traffic on the highway, yet the congestion always clears eventually. Others question router queue lengths, assuming the router will eventually refuse to serve. Ten years ago I naively thought highway designers and router or switch designers had easy jobs. Now that I know more, I find this is not the case: far more trade-offs and games are involved, not only in technology but also in psychology, sociology, and economics.

The purpose of this article, then, is to show with the simplest possible description how queuing theory guides both expressway design and packet-switched networks. There is no complex mathematical derivation here; if you want one, work it through yourself or revisit the relevant chapters of a university probability textbook. If you are not interested, just remember the conclusions; if you have doubts, as I once did, dig deeper.

Overview of queuing theory

The theory behind queues is queuing theory. Queuing theory is the core of packet switching. More than that, it is the core of every scenario that involves waiting in line (checkout counters, bank tellers, and so on), including highway construction. In short, it is very important, and in fact also very simple. A good article, "A Dash of Queueing Theory", gives a plain description of what most people assume is very complicated, so I am taking advantage of the weekend to write a review and, along the way, share my personal understanding of packet switching and statistical multiplexing.

The theory behind packet switching is queuing theory. In fact, queuing theory had been in use for hundreds, even thousands, of years before packet switching appeared. Packet switching simply happened to be formalized just as networking entered its springtime, and the two were married.

The other core of packet switching is statistical multiplexing. Long before packet switching, statistical multiplexing had existed for thousands, even millions, of years. The world we live in is statistically multiplexed: roads, land, and public facilities are all shared this way.

The theory behind statistical multiplexing is still queuing theory, specifically queuing fairness. Early in history, the absence of statistical multiplexing led to unfairness, which had a butterfly effect and eventually produced kings, empires, and rulers; but that is beyond the scope of this article.

Unfolding the queuing process and the being-served process of tasks (packets, vehicles) over time

If we unfold a single queuing process over time, we get the following diagram (the arrival rate is fixed):

Similarly, if we unfold over time the process by which the queue's tasks consume an entity's output service (the departure process), we get the following diagram:

Merging the queuing process and the being-served process of tasks (packets, vehicles)

If queuing entities could queue forever, the queue would grow infinitely long. In reality no queue is infinite; otherwise, why would anyone wait in line for a service they will never receive? The abandoned train station of the novel "Ten Years" does not exist in reality. Why? Because as long as there is a queue, service is happening at its head: entities join at the tail and are served at the head. It is a simple first-in, first-out process.

So if we merge the two diagrams above, we find that the band between the two curves is the queue, as shown in the following figure:

Queuing analysis of tasks with fixed-rate input

1. The task input rate equals the service absorption rate

In this case, starting from the initial state, no queue forms. For example, three people per unit time need service and the service desk has three clerks; ignoring service time, every person is always served immediately. But add any other factor, such as one customer's transaction taking too long (so service time can no longer be ignored), and a queue appears. Once a service is delayed, the service output rate generally slows down, and that is the main cause of queuing.

2. The task input rate and the service absorption rate differ

If the service output rate is greater than the queue input rate and stays greater, the queue eventually disappears; if it stays smaller, the queue grows longer and longer. This is common sense, but it is the basis for the deeper analysis.
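This common-sense picture can be made concrete with a minimal sketch (my own, not from the article) of the deterministic fluid queue: with fixed arrival rate `lam` and fixed service rate `mu`, the backlog after time `t` is `max(0, Q0 + (lam - mu) * t)`.

```python
def queue_length(q0: float, lam: float, mu: float, t: float) -> float:
    """Deterministic fluid-queue backlog after time t (never negative)."""
    return max(0.0, q0 + (lam - mu) * t)

# Service faster than input: an initial backlog of 100 drains away.
print(queue_length(100, lam=3, mu=4, t=50))   # 50.0 left after t=50
print(queue_length(100, lam=3, mu=4, t=200))  # 0.0, the queue has vanished
# Input faster than service: the queue only grows.
print(queue_length(0, lam=4, mu=3, t=50))     # 50.0
```

The sign of `lam - mu` decides everything in this idealized model, which is exactly why the later sections focus on what happens when the rates are not fixed.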

3. Conclusion

From the analysis of the following figure we can see that you need only increase service resources a little, not a lot. The effect we want is to tilt the service output curve by a small angle so that it finally intersects the queue input curve:

The real situation: Poisson-distributed input and exponentially distributed inter-arrival intervals

The analysis and diagrams so far make the queuing system look unstable: the queue either shrinks to zero or grows forever. Anyone holding that view is bound to question the availability of routers. But in reality, whether in the physical world (highway congestion, panic buying, ticket queues) or in the online world (routers, switches, servers), we never see a queue that snaps to zero or grows without bound. The queue is always in motion; a queuing entity is always eventually served unless it gives up, and the queue never lengthens indefinitely. Why?

Because the arrival rate is not a constant: it follows a Poisson distribution around a mean. Plainly put, there is an average arrival rate, and the further a value deviates from that average, the less likely you are to see it. If the average arrival rate is 10, then 10 entities arriving per unit time is most likely; 9 or 11 is next most likely, then 8 or 12, then 7 or 13, and so on. An arrival count of 1 or 19 is far less likely than any of these.

Because the arrival rate is Poisson, the interval between two adjacent arrivals must also follow a distribution law, and it can be derived from the Poisson distribution. Since mathematics is only a tool, I will skip the derivation and state the answer: the inter-arrival interval follows an exponential distribution. This means the next arrival is most likely to come within the shortest interval, say within 1 minute; if 5 minutes have passed with no arrival, there is little hope of one within the next 15. This matches common sense: if the person you are waiting for is 20 minutes late, they are probably not coming; if the company calls before you have even gotten home from the interview, you are probably hired.
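The Poisson-to-exponential link stated above is easy to check numerically. A small sketch (my own, with an arbitrary rate of 10 arrivals per unit time): sample many inter-arrival gaps and confirm that short gaps dominate, with about 63% of gaps falling below the mean, the signature of the exponential distribution.

```python
import math
import random

random.seed(42)
lam = 10.0                                         # average arrivals per unit time
gaps = [random.expovariate(lam) for _ in range(100_000)]

mean_gap = sum(gaps) / len(gaps)                   # should approach 1/lam
below_mean = sum(1 for g in gaps if g < 1 / lam)   # gaps shorter than the mean

print(f"mean gap ~ {mean_gap:.4f} (theory: {1 / lam:.4f})")
print(f"fraction of gaps below the mean: {below_mean / len(gaps):.3f} "
      f"(theory: {1 - math.exp(-1):.3f})")
```

The second number is why "the next one comes soon": under the exponential law, the shortest intervals are always the most probable.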

This is the real queuing scene under the Poisson and exponential distributions.

Queuing analysis when the task input rate follows a Poisson distribution

The real queuing scenario is shown below:

Viewed from a distance, the small bends disappear and the whole input curve looks like a straight line; the service output curve is the same, as fractal theory would explain. There are two things to keep in mind:

a) In most cases the input rate stays near the average input rate, and it at least converges to that average (the Poisson distribution at work);

b) In most cases, given (a), the next arrival will come soon (the exponential distribution at work).

Examples of queue congestion and its relief on the expressway

1. The single-lane case

When the cars ahead stop, the cars behind cannot get through; after all, they cannot fly. A queue forms. Between stops the cars at the front move off at a fixed rate and vehicles leave the head of the queue one after another, but the queue does not disappear quickly, because vehicles keep joining at the tail. If the arrival rate equals the departure rate, the queue persists forever; if the departure rate is less than the arrival rate, the queue never disappears and keeps growing; if the departure rate is greater than the arrival rate, the queue eventually disappears.

2. Two lanes, with cutting in and lane changes not considered

This case can almost be ignored, because it essentially never occurs on a statistically multiplexed channel. It describes the non-interfering superposition of one lane suffering head-of-line congestion and one lane flowing normally: the two lanes do not affect each other, one completely stalled and the other at full speed. Have you ever seen such a thing? I certainly haven't.

Why is that?

One of the prerequisites for a statistically multiplexed network or channel is queue fairness! This is the core of the core; it is precisely this principle that gives packet-switched networks their theoretical soundness and availability. Consider circuit-switched networks, or trains: channels are dedicated, and even when idle they cannot be borrowed by others. Every communicating entity must apply for a private channel of its own. Later, to save resources, various forms of multiplexing appeared, but their granularity remained coarse and idle gaps remained. Take TDM on the Beijing-Guangzhou railway line: many trains share the line at different times, yet much of the time there is no traffic on the rails at all. Strict time-slot multiplexing demands a substantial clock-synchronization mechanism, so the gap between adjacent slots must be long enough, which is expensive. In other words, for freight where the transport task is not fixed, you cannot guarantee that the time slot T allocated to truck A today will actually be used by truck A, since it may have no job today; and reallocating slot T to someone else requires a complex scheduling mechanism. What is the best, most reasonable plan? Eliminate the waste entirely: shrink the multiplexing granularity until central control disappears and the channel becomes a completely free market, where each communicating entity decides for itself, according to its own needs, when to use the channel. That is statistical multiplexing.

No rules are the best rules!

The most important "rule" is to cut in, squeeze in, and use every available gap. Whatever exists is reasonable, because all multiplexing rests on one premise: fairness. Fully private channels, TDM, and FDM cause no queuing, because the channels of different communicating entities are physically or logically separated; under statistical multiplexing the channels are completely mixed, and fairness has to be built by the participants themselves. When nothing blocks the road, everyone is treated fairly; unfairness is felt precisely when there is queuing and congestion. So a fairness mechanism is needed during the queuing period too: cutting in and changing lanes, that is, building virtual output queues (VOQ) on one's own. Vehicles in the queued lane have this right because they are fully equal to the vehicles in the normal lanes; it is the head of the line that causes the congestion, and the queuers behind it are innocent victims of unfairness. The only fair measure available is to lower the flow quality of all lanes together.

3. Why does congestion spread so fast?

With the analysis above in mind, you can see that under the exponential distribution the next arrival comes soon, whether near the expectation of the Poisson distribution or out at its tails. The next arrival is always most likely to come within the shortest interval, and that is the root of rapid spread. Comparing with the figure above, the banded area between the two curves grows quickly, turning local congestion into global congestion.

4. The real situation: widespread queuing, large-area congestion

This explains why, when there is a jam on the highway, not cutting in is impossible. So what really happens? We all know the scene: stuck on the highway for an hour, creeping forward, until we finally discover two cars with a minor collision in the lane farthest from us. On a one-way four-lane road, how can an accident in the outermost lane affect the innermost lane, and produce a queue perhaps tens of kilometers long?

In the earlier case where cutting in and lane changes were not allowed, the accident affects only one lane, causing queue congestion there alone. In reality, the vehicles in that queued lane will certainly feel the unfairness; as mentioned earlier, in routers this situation is called head-of-line (HOL) blocking. Vehicles differ from packets in that they are self-routing, so the vehicles in the queued lane start building their own VOQ, their virtual output queue: bluntly, they cut in and change into the lanes that are not queued. This triggers further VOQ-building in those lanes, a chain reaction that spreads the traffic of the congested lane evenly across the normal lanes, pushing the normal lanes' input rates above normal and creating the same queues there.
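The chain reaction can be sketched with a toy simulation (my own construction, with invented rates): lane 0 is blocked by an accident, lane 1 flows normally, and queued cars equalize the two queues by changing lanes. The backlog ends up shared, so the normal lane gets congested too.

```python
# Lane 0 is blocked (serves 0 cars/tick); lane 1 flows normally
# (serves 4 cars/tick); 3 cars/tick arrive in each lane.
lanes = [0, 0]        # queue length per lane
service = [0, 4]      # cars served per tick in each lane

for _ in range(20):
    for i in range(2):
        # arrivals join the tail, then the lane serves what it can
        lanes[i] = max(0, lanes[i] + 3 - service[i])
    # lane changing ("building a VOQ"): half the difference shifts
    # from the longer queue to the shorter one
    move = (lanes[0] - lanes[1]) // 2
    lanes[0] -= move
    lanes[1] += move

print(lanes)  # both lanes now carry a backlog, not just the blocked one
```

Total arrivals (6 cars/tick) exceed the surviving capacity (4 cars/tick), so once the load is spread, the whole road queues: local congestion becomes global.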

This matches the TCP/IP network exactly. For vehicles and packets alike, the network is statistically multiplexed and there are no strict rules like TDM or FDM. Bluntly, you use every available gap: as long as the channel is idle, I use it while you are not. Remember CSMA/CD? It is much the same. When the gap between a car and the car ahead exceeds the safe distance, a car in the adjacent lane will want to squeeze in. It first performs its own "carrier sense, collision detection", flashing its lights to make sure the car behind has noticed, and so on. This is not gentlemanly behavior, but it is the essence of every statistically multiplexed channel: a risk-taking pursuit of efficiency, ideally a mutually beneficial one. Conflicts do occur under these unwritten rules (without them, imagine a congested city center where trucks, cars, pedestrians, and scooters somehow never touch; remember that rear-view mirrors have blind spots and a driver may not know the exact length, width, and height of his own car). The effects are usually local and rare, so the risk is worth taking. But once a conflict does happen, a collision, an accident, it causes the large-scale global congestion we described above. So the next question is: how is this queue congestion relieved? It certainly is relieved; if it were not, the statistically multiplexed network would be completely unusable! Read on.

How to alleviate the queue congestion on the highway

We know roads get congested, but the congestion is always eventually relieved. In the ideal model, once vehicles arriving at a fixed input rate and departing at a fixed output rate have formed a queue, that queue persists forever; that was the conclusion of the theoretical analysis above. In reality, at the moment congestion begins to clear, the output rate is indeed fixed, but the input rate is not: the Poisson distribution of the input rate is what finally relieves the congestion.

Admittedly, adding a lane is equivalent to adding a bit more service resource. From our earlier discussion, with a fixed input rate and a fixed service rate (that is, fixed service resources), if the input curve and the output service curve are parallel, the queue caused by congestion persists forever at constant length; but add even one lane and the output service curve tilts up more steeply, eventually intersecting the input curve and making the queue disappear. A hair's difference at the start, a thousand miles apart at the end.

Of course, the real situation is far more complicated; one must also account for the extra congestion caused by uncivilized cutting in and lane changing, which is the bad side. The good side, in reality, is that the input curve is not a straight line: its slope varies. How? Simply put, the curve still trends in the direction set by the fixed input rate, but the rate at each point follows a Poisson distribution whose expectation is the slope of that fixed input rate. Let us look at how this Poisson distribution rescues us from the quagmire once output service is restored. There are two cases.

Case 1: output service is suspended, for example a collision or traffic accident closes the lane

While the accident is not yet cleared, the Poisson distribution does nothing for the congestion: every arriving vehicle only lengthens the queue. Because the arrival rate is Poisson, sometimes more cars come and sometimes fewer, but either way the queue grows; only its growth rate varies.

Once the accident is cleared, traffic at the accident site fully recovers: the first car in the queue starts off, then the second, then the third. The departure rate at the accident point is now fixed at full speed, as if the passage rate at that point had jumped to a value the Poisson distribution would assign minimal probability, while the arrival rate at the tail of the queue still follows the ordinary Poisson distribution. Until the queue disappears, the fixed passage rate at the head of the queue exceeds, on average, the arrival rate at the tail.

If the arrival rate at the tail of the queue were always exactly equal to the passage rate at the head, the queue would persist forever!
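Case 1 can be sketched numerically (my own toy model, with invented rates): fixed full-speed departures of `mu = 10` per tick versus Poisson arrivals with mean `lam = 9`. Because the mean arrival rate sits just below the fixed drain rate, fluctuation below the mean eventually empties the backlog.

```python
import math
import random

random.seed(7)

def poisson_sample(lam: float) -> int:
    """Knuth's algorithm: multiply uniforms until the product drops below e^-lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

lam, mu = 9, 10       # mean Poisson arrivals vs fixed full-speed departures per tick
q, ticks = 200, 0     # backlog of 200 cars left behind by the accident
while q > 0:
    q = max(0, q + poisson_sample(lam) - mu)
    ticks += 1

print(f"queue drained after {ticks} ticks")
```

The drain takes a while (roughly `200 / (mu - lam)` ticks on average), but it always finishes, which is exactly the "rescue" the Poisson distribution provides.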

Case 2: output service is delayed, for example at a toll station

Suppose the number of toll windows equaled the number of lanes. By the earlier analysis, since the toll station is always there and never goes away, the queue congestion it causes would "never return to normal". The explanation for case 1 therefore cannot apply to case 2. Yet outside holidays we do not see large-scale, unrecoverable queue congestion at highway toll stations. Why?

Because the number of toll windows is greater than the number of lanes. Each car incurs extra delay, but multiple cars are handled in parallel, so the total throughput (the line-rate capacity) is unchanged and, overall, queue congestion is relieved.
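The arithmetic behind "more delay per car, same aggregate throughput" is simple. A sketch with invented numbers: 3 lanes, each delivering 2 cars per second, feed a plaza of 12 windows, each taking 2 seconds per car.

```python
def plaza_capacity(windows: int, service_time_s: float) -> float:
    """Cars per second the plaza can sustain with parallel windows."""
    return windows / service_time_s

arrival_rate = 3 * 2.0                                   # cars/s reaching the plaza
capacity = plaza_capacity(windows=12, service_time_s=2.0)

print(capacity, arrival_rate)  # 6.0 cars/s each: throughput is preserved
```

A single car waits 2 seconds at a window instead of rolling through, yet because twelve windows run in parallel, the plaza matches the line rate of the road feeding it. That parallelism is the toll plaza's (and the router pipeline's) whole trick.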

Case 2 is worth noting: it is exactly what routers do!

To end this section, consider a basic fact. The Shanghai Hujia Expressway (S5) removed its toll station around 2012, removed the central green divider around 2013, and added one lane in each direction along with hard shoulders on both sides. Why? Why add only one lane per direction instead of two, and why did the simple act of removing the toll station and adding a lane bring such a large increase in capacity? The analysis above should supply the answer; note that it is not only a matter of tolls and drivers' wallets.

There is another question worth pondering: the design of the road system of the ancient Roman Empire. But that belongs not to queue congestion but to connectivity: how does connectivity produce exponentially increasing returns?

Another reason highway toll stations are free on holidays

It is not just a matter of money; at holiday volumes the toll station itself becomes unreasonable. Let me draw an analogy: under heavy traffic, a backbone router drops some of its traffic-audit policies and leaves that auditing to the authenticated, trusted BGP peer at the other end, to be completed by the edge routers inside the IGP domain. Isn't that similar to waiving tolls during holidays? The plaza's capacity cannot bear the load; it is not that the operator wants to save you money. The money is not saved anyway: everyone drives, and it all gets handed to the tourist attractions.

Like a router, a highway is designed up front with its average capacity, maximum capacity, worst-case queuing delay, toll-station delay, and degree of parallelism all computed with nontrivial mathematics, inevitably involving the Poisson and exponential distributions; psychological and climatic factors also carry heavy weight. These calculations do not target burst traffic. No organization designs highways or router buffers around burst traffic: the high cost would bring no comparable benefit (though if one did, it would truly be serving the people). A queuing system should be designed for the average capacity, with total capacity slightly above the average to absorb small unpredictable bursts; beyond that, it relies entirely on the following risky assumption to relieve queue congestion:

The input curve is a wavy curve whose overall trend is a straight line with slope equal to the expected input rate. The output service rate need only be slightly larger than that expectation for the output service curve to catch up with the input curve most of the time, so that in most cases the two curves intersect!

Router buffer size and scheduling algorithm setting

After all this analysis, do you suddenly realize that deciding how much buffer (that is, queuing area) a router should have is not a simple matter? You must predict in advance the average traffic, the minimum traffic and its duration, the maximum burst traffic and its duration, and then weigh quality of service (decomposed into input rate and output rate) against cost. In short, the router buffer may even need to be dynamic, and the mathematics involved is especially complex; it is essentially a game, so besides queuing theory you also need game theory.

Buffers are set on the input side; on the output side there are scheduling algorithms to configure. Truly, routers are much more complicated than highways, because highway vehicles route themselves and build their own output queues (via turn signals, cutting in, lane changes), while a router must navigate blind, relying entirely on its internal algorithms.

Still, I really hope the highway example gives people a better understanding of the nature of routers, and of packet switching and statistical multiplexing along with them.

