2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article introduces how LimitLatch is used in Tomcat. The content is quite detailed; interested readers can refer to it, and I hope it is helpful to you.
Tomcat's LimitLatch class controls the upper limit on the number of sockets accepted for network communication. It was introduced in Tomcat 7, and its implementation is simple, which makes it a good way to learn about thread synchronization.
LimitLatch relies on an inner class, Sync, for thread synchronization, and Sync inherits from the familiar AbstractQueuedSynchronizer. AQS is the core component of java.util.concurrent; traces of it can be found in many common thread-synchronization utilities, and readers can consult the source code of ReentrantLock, CountDownLatch, Semaphore, and similar classes.
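As a warm-up, AQS's shared mode can be seen in a minimal custom synchronizer. The sketch below implements a one-shot gate: tryAcquireShared fails until open() flips the state, after which all waiting threads proceed. The class and method names here are illustrative, not part of Tomcat or the JDK.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal shared-mode AQS synchronizer: a one-shot gate.
// Names are illustrative; this is not Tomcat code.
public class OneShotGate {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected int tryAcquireShared(int ignored) {
            // state 1 means "open": any number of threads may pass
            return getState() == 1 ? 1 : -1;
        }
        @Override
        protected boolean tryReleaseShared(int ignored) {
            setState(1);   // open the gate
            return true;   // tell AQS to propagate the wake-up to all waiters
        }
    }

    private final Sync sync = new Sync();

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(0);  // blocks while the gate is closed
    }

    public void open() {
        sync.releaseShared(0);  // AQS wakes every queued thread
    }

    public static void main(String[] args) {
        OneShotGate gate = new OneShotGate();
        new Thread(gate::open).start();  // another thread opens the gate
        try {
            gate.await();                // returns once the gate is open
            System.out.println("passed the gate");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Everything except the two try* overrides, the blocking, queuing, and wake-ups, comes for free from AQS; that is exactly the division of labor LimitLatch exploits.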
// if we have reached max connections, wait
countUpOrAwaitConnection();

SocketChannel socket = null;
try {
    // Accept the next incoming connection from the server socket
    socket = serverSock.accept();
    ……
In both NIO and BIO modes, Tomcat acquires a resource through the countUpOrAwaitConnection method before accepting a socket; if the maximum connection count has already been reached, the current thread has to wait for a resource to be released. This method ultimately calls acquireSharedInterruptibly on LimitLatch's inner class Sync, that is, the acquireSharedInterruptibly method of AQS.
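The shape of that acceptor loop can be sketched outside Tomcat. The sketch below uses a plain java.util.concurrent.Semaphore as a stand-in for LimitLatch (which is a Tomcat-internal class); acquire() before accept plays the role of countUpOrAwaitConnection, and the slot is given back when the connection closes or accept fails. Class and method names are illustrative.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.concurrent.Semaphore;

// Sketch of the "acquire a slot before accept" pattern.
// Semaphore stands in for Tomcat's internal LimitLatch.
public class BoundedAcceptor {
    private final Semaphore connectionLimit;  // plays the role of LimitLatch
    private final int port;

    public BoundedAcceptor(int maxConnections, int port) {
        this.connectionLimit = new Semaphore(maxConnections);
        this.port = port;
    }

    public int availableSlots() {
        return connectionLimit.availablePermits();
    }

    public void acceptLoop() throws IOException, InterruptedException {
        try (ServerSocketChannel serverSock = ServerSocketChannel.open()) {
            serverSock.bind(new InetSocketAddress(port));
            while (true) {
                // if we have reached max connections, wait
                // (the role of countUpOrAwaitConnection)
                connectionLimit.acquire();
                SocketChannel socket = null;
                try {
                    socket = serverSock.accept();
                    handle(socket);
                } catch (IOException e) {
                    connectionLimit.release();  // accept failed: give the slot back
                    throw e;
                }
            }
        }
    }

    private void handle(SocketChannel socket) {
        // hand off to a worker; the worker must call
        // connectionLimit.release() when the connection is closed
    }
}
```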
From the overridden methods of the inner class Sync, we can see that Sync is a shared-mode synchronizer that overrides two methods, tryAcquireShared and tryReleaseShared. These two methods can be so simple because the parent class AQS quietly handles all the other logic: queuing, waiting, wake-up, and so on.
protected int tryAcquireShared(int ignored) {
    long newCount = count.incrementAndGet();
    if (!released && newCount > limit) {
        // Limit exceeded: acquisition fails unless released is set
        count.decrementAndGet();  // failed to acquire, roll the count back
        return -1;
    } else {
        return 1;
    }
}
When acquiring the shared resource, LimitLatch.Sync uses an AtomicLong: the result of its atomic increment is compared against the configured upper limit on the number of shared resources. If the limit is exceeded, the resource cannot be acquired for now, and AQS puts the thread into the wait queue to await the next wake-up. LimitLatch also defines a released field; when it is true, acquisition of the shared resource always succeeds.
public boolean releaseAll() {
    released = true;               // once this flag is true, later acquisitions always succeed
    return sync.releaseShared(0);  // notify waiting threads to retry acquisition
}
A question arises here: if resources can always be acquired anyway, LimitLatch no longer needs to exist, so why have this seemingly superfluous released field? It actually handles a state transition. When switching from limiting the resource count with LimitLatch to not limiting it at all, simply setting the LimitLatch reference to null to skip resource contention is not enough.
If threads were already waiting in the queue before the switch, and no resource happens to be released afterwards, those threads would remain parked after the state change, which contradicts the new "unlimited" state. In that case, the released field is set to true, and a single resource release lets AQS wake all waiting threads to retry acquisition; every thread then acquires successfully and returns immediately.
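The interaction between the counter and the released flag can be isolated from AQS entirely. The toy class below (illustrative names, not Tomcat's code) reproduces just the decision logic: increment first, roll back on overflow, and let released short-circuit the limit check.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy reproduction of LimitLatch's decision logic (no queuing or blocking).
// tryAcquire mirrors tryAcquireShared: increment optimistically, then roll
// back if over the limit; once released is set, acquisition always succeeds.
public class ToyLimit {
    private final AtomicLong count = new AtomicLong(0);
    private volatile boolean released = false;
    private final long limit;

    public ToyLimit(long limit) { this.limit = limit; }

    public boolean tryAcquire() {
        long newCount = count.incrementAndGet();  // optimistic increment
        if (!released && newCount > limit) {
            count.decrementAndGet();              // over the limit: roll back
            return false;
        }
        return true;
    }

    public void release() { count.decrementAndGet(); }

    public void releaseAll() { released = true; } // all later acquires succeed
}
```

With a limit of 2, the third tryAcquire fails until either a release frees a slot or releaseAll switches the limiter off.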
protected boolean tryReleaseShared(int arg) {
    count.decrementAndGet();  // decrement the counter to release the resource
    return true;
}
The release path is even simpler: the AtomicLong representing the resource count is decremented directly, and the subsequent work, such as waking threads that are waiting for a resource, is handled by AQS.
At this point another question arises: the same function could be accomplished with the JDK's own Semaphore class. If someone bothered to write a separate one, it was presumably for performance; after all, this class sits in front of every socket accept, so it affects performance directly. The following code is the tryAcquireShared method overridden by Semaphore's FairSync (JDK 1.8). It is essentially the same as LimitLatch, except that it uses a CAS retry loop on the AQS state instead of an AtomicLong increment:
protected int tryAcquireShared(int acquires) {
    for (;;) {
        if (hasQueuedPredecessors())
            return -1;
        int available = getState();
        int remaining = available - acquires;
        if (remaining < 0 ||
            compareAndSetState(available, remaining))
            return remaining;
    }
}
So, on to a performance test. The scenarios were 64 threads competing for 64 resources and 64 threads competing for 32 resources, each over 3,000,000 iterations. The result: LimitLatch performed about 10% worse than Semaphore. Presumably I was doing something wrong.
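A minimal version of such a comparison can be sketched with a fair Semaphore versus the bare AtomicLong increment/decrement that LimitLatch builds on. This single-threaded sketch has no warm-up and no contention, so it is not a substitute for the multi-threaded test above (a real measurement would use something like JMH); the names and the operation count are illustrative.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicLong;

// Single-threaded micro-comparison sketch (illustrative only):
// fair Semaphore acquire/release vs. the bare AtomicLong
// increment/decrement that LimitLatch builds on.
public class LatchBench {
    static long benchSemaphore(int ops) {
        Semaphore s = new Semaphore(64, true);  // fair, like FairSync above
        long start = System.nanoTime();
        for (int i = 0; i < ops; i++) {
            s.acquireUninterruptibly();
            s.release();
        }
        return System.nanoTime() - start;
    }

    static long benchAtomic(int ops) {
        AtomicLong count = new AtomicLong();
        long start = System.nanoTime();
        for (int i = 0; i < ops; i++) {
            count.incrementAndGet();
            count.decrementAndGet();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int ops = 3_000_000;
        System.out.println("Semaphore:  " + benchSemaphore(ops) + " ns");
        System.out.println("AtomicLong: " + benchAtomic(ops) + " ns");
    }
}
```

Absolute numbers depend heavily on the JVM, the hardware, and the contention level, so any single run should be treated as anecdotal.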
That covers how LimitLatch is used in Tomcat. I hope the above content is helpful to you and teaches you something new. If you found the article useful, feel free to share it so more people can see it.