This article explains the performance of Redis under high concurrency through a real project: how the problem showed up, how it was diagnosed, and what was ultimately done about it.
Foreword:
I recently started a project and am responsible for its architectural design and implementation. The company had already built a lot of APIs for people outside the company, but they were used simply by handing the interface URL to others, with no encryption and no concurrency control; nobody tracked which machine an interface program ran on or which IP had been given to whom, and there was no management platform. So I knew very well that the value of these interfaces was hard to measure (which interfaces are used heavily by others and which are barely used).
For this "monitoring" requirement alone, we introduced Redis as a middle tier. First, we improved the registration process for users of the interfaces: from the user's information and the target address we hash a key, the key maps to an address, and the (key, address) pair is stored in Redis. Second, traffic flows through nginx in our project roughly as follows (a minimal code sketch of this flow follows the list):
1. After registering, the user gets their key and accesses the service through a URL that contains the key and is completely different from the original URL.
2. Nginx captures the user's special key, the program looks up the target address in Redis by that key, and nginx then visits the real address on the user's behalf and returns the response.
This process brings many benefits:
(1) The real address is hidden, and the program can intervene in user access in front of the upstream server to improve security; the intervention logic can be as complex as needed.
(2) The user's access information is captured and written back to Redis. The upstream server periodically persists the logs in Redis into Oracle and deletes them, and they can then be further analyzed and visualized.
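Here is a minimal sketch of the register/lookup flow described above, using the redis-py client. The key derivation, Redis key names, and addresses are assumptions for illustration; the article does not give the real details.

```python
# Minimal sketch of the register/lookup flow described above. The key scheme,
# Redis key names, and addresses are assumptions for illustration only.
import hashlib
from typing import Optional

import redis

r = redis.Redis(host="127.0.0.1", port=6379, db=0)  # hypothetical Redis instance

def register(user_info: str, target_address: str) -> str:
    """Hash the user info and target address into a key and store (key, address)."""
    key = hashlib.sha256(f"{user_info}:{target_address}".encode()).hexdigest()[:16]
    r.set(f"api:addr:{key}", target_address)   # the (key, address) pair kept in Redis
    return key                                 # the user embeds this key in the URL

def resolve(key: str) -> Optional[str]:
    """What the middle tier does per request: map the key back to the real address."""
    addr = r.get(f"api:addr:{key}")
    return addr.decode() if addr else None

# Example usage (hypothetical values):
key = register("user-42", "http://10.0.0.8:8080/real-api")
print(resolve(key))   # -> http://10.0.0.8:8080/real-api
```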
Here comes the problem.
The project is still in the testing stage, and the resources are one Windows Server machine and one CentOS 6.5 machine. During testing there are roughly 100,000 requests per 10 seconds. For the first day or two after deployment there was no problem, but then Redis connections started to fail. Inspecting the process's connections (on Windows Server) showed the following:
There were a large number of TCP connections stuck in the FIN_WAIT_2 state.
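For reference, here is a small sketch that counts TCP connections by state with psutil. This is only an assumption about how one might check it in code; the original diagnosis was done by inspecting the connection list on Windows Server.

```python
# Count TCP connections by state to spot a FIN_WAIT_2 pile-up.
# (A sketch; the original check was done by reading netstat-style output.)
from collections import Counter

import psutil

states = Counter(conn.status for conn in psutil.net_connections(kind="tcp"))
for state, count in states.most_common():
    print(f"{state:12s} {count}")
# A healthy box is mostly ESTABLISHED/TIME_WAIT; thousands of FIN_WAIT2
# entries pointing at the Redis port indicate half-closed connections piling up.
```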
Analysis:
1. Redis processes connections with a single thread, which means the situation described in point 2 below is bound to arise.
2. This is clearly caused by a large number of resources that are never released between nginx and Redis. Looking up what the FIN_WAIT_2 TCP state means:
In HTTP applications there is a scenario where the SERVER closes the connection for some reason, such as a KEEPALIVE timeout. The side that actively closes (the SERVER) then enters the FIN_WAIT_2 state. However, the TCP/IP design has a weakness here: the FIN_WAIT_2 state has no timeout (unlike TIME_WAIT), so if the CLIENT never closes its end, the connection stays in FIN_WAIT_2 until the system is restarted, and an ever-growing number of FIN_WAIT_2 connections can eventually crash the kernel.
I didn't study this carefully back in college, so here are the TCP connection state transitions for reference.
Client state transitions:
CLOSED -> SYN_SENT -> ESTABLISHED -> FIN_WAIT_1 -> FIN_WAIT_2 -> TIME_WAIT -> CLOSED
Server state transitions:
CLOSED -> LISTEN -> SYN_RCVD -> ESTABLISHED -> CLOSE_WAIT -> LAST_ACK -> CLOSED
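To make the FIN_WAIT_2 case concrete, here is a small local demonstration (the loopback address and port are placeholders): one side half-closes the connection and the peer never closes its end, so the closing side sits in FIN_WAIT_2 for as long as the peer holds the socket open.

```python
# Sketch: the side that sends its FIN first and never receives a FIN back
# ends up parked in FIN_WAIT_2. Loopback address and port are placeholders.
import socket
import time

import psutil

PORT = 9000  # hypothetical port

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", PORT))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", PORT))
conn, _ = srv.accept()

conn.shutdown(socket.SHUT_WR)   # the accepting side sends its FIN ...
time.sleep(1)                   # ... the peer's kernel ACKs it but never closes

for c in psutil.net_connections(kind="tcp"):
    if c.laddr and c.laddr.port == PORT and c.status != psutil.CONN_LISTEN:
        print(c.status)         # expected: FIN_WAIT2 for the accepted socket
```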
Defective clients and persistent connections
Some clients have problems handling persistent connections (aka keepalives). When the connection goes idle and the server closes it (based on the KeepAliveTimeout directive), the client is written in such a way that it does not send a FIN and ACK back to the server. This means the connection stays in the FIN_WAIT_2 state until one of the following happens:
The client opens a new connection to the same or a different site, which causes it to fully close the earlier connection on that socket.
The user exits the client program, which in some (perhaps most?) clients causes the operating system to fully close the connection.
The FIN_WAIT_2 state times out, on servers whose operating system implements a FIN_WAIT_2 timeout.
If you are lucky, this means the defective client will eventually close the connection completely and release your server's resources.
However, there are cases where the socket is never fully closed, for example a dial-up client that disconnects from its ISP before closing the client program.
In addition, a client may sit idle for days without opening a new connection, keeping the socket valid for days even though it is no longer in use. This is a bug in the TCP implementation of the browser or operating system.
The reasons are:
1. With a persistent connection that stays in the IDLE state, the SERVER eventually closes it, and because of a programming defect the CLIENT never sends FIN and ACK packets back to the SERVER.
2. Apache 1.1 and 1.2 added the linger_close() function, described in an earlier post, which may also cause this problem (why is not clear).
Possible solutions:
1. Add a timeout mechanism for the FIN_WAIT_2 state. This is not part of the protocol, but it is already implemented in some operating systems, such as Linux, Solaris, FreeBSD, HP-UX, and IRIX.
2. Do not compile with linger_close().
3. Use SO_LINGER instead, which works well on some systems (see the sketch after this list).
4. Increase the amount of memory (mbufs) used to store network connection state, to prevent the kernel from crashing.
5. Disable KEEPALIVE.
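As a sketch of option 3, this is what setting SO_LINGER looks like in Python (the address and values are assumptions): with l_onoff=1 and l_linger=0, close() performs an abortive close that sends an RST instead of going through the FIN handshake, so the connection never enters FIN_WAIT_2, at the cost of discarding any unsent data.

```python
# Option 3 sketch: SO_LINGER with a zero linger time makes close() send an
# RST (abortive close), so the socket never lingers in FIN_WAIT_2.
# Address and values are illustrative, not the project's real settings.
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# struct linger { int l_onoff; int l_linger; } packed as two ints
sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))

sock.connect(("127.0.0.1", 6379))   # hypothetical Redis address
sock.sendall(b"PING\r\n")           # inline command, just to exercise the socket
print(sock.recv(64))                # -> b"+PONG\r\n"
sock.close()                        # abortive close: RST, no FIN_WAIT_2
```

This is a blunt instrument: it trades a graceful shutdown for guaranteed cleanup of the connection table.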
In view of this situation we held several discussions and reached the following conclusions:
1. Configure a connection pool between nginx and Redis, with keepalive times of 10 seconds and 5 seconds respectively; the result was the same.
2. Drop keepalive entirely, i.e. use no connection pool and close() the connection after every use. The number of lingering connections does drop, but without pooling this means opening and closing connections 100,000 times every 10 seconds, which is far too expensive.
3. Build a Redis cluster on top of the existing system. This might make the problem go away, but 100,000 requests per 10 seconds is really not that much; it would be a workaround, and the root cause would remain unfound.
4. Set an idle timeout on the Redis side; the result was the same. (A client-side sketch of options 1, 2, and 4 follows.)
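For reference, options 1, 2, and 4 look roughly like this from a client's point of view, using the redis-py library. Host, port, and timeout values are assumptions; in the project the client side is nginx, so the real knobs live in the nginx and Redis configuration rather than in Python.

```python
# Rough client-side equivalents of options 1, 2 and 4, using redis-py.
# Host/port and timeout values are placeholders, not the project's settings.
import redis

# Option 1: a connection pool with TCP keepalive and a socket timeout.
pool = redis.ConnectionPool(
    host="127.0.0.1",
    port=6379,
    max_connections=50,
    socket_keepalive=True,
    socket_timeout=5,   # seconds; analogous to the 5 s / 10 s keepalive tuning
)
r = redis.Redis(connection_pool=pool)
r.ping()

# Option 2: no pooling at all -- open, use, and close a connection per request
# (this works, but at ~100,000 requests per 10 s it is prohibitively expensive).
single = redis.Redis(host="127.0.0.1", port=6379, single_connection_client=True)
single.ping()
single.close()

# Option 4 corresponds to `timeout <seconds>` in redis.conf, which tells the
# server itself to drop connections from clients that stay idle too long.
```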
Final solution:
Strictly speaking it is not a solution, because it abandons Redis's in-memory store and relies on nginx's own in-memory facilities instead. Most of the Redis tuning advice found on the Internet did not apply here; this kind of problem has to be analyzed and solved case by case.