
How to deal with the logical bottleneck of High concurrency Server

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains in detail how to deal with the logic bottlenecks of a high-concurrency server. The content is shared for your reference; I hope you will gain some understanding of the relevant concepts after reading it.

Concurrency, in operating-system terms, means that several programs are each somewhere between starting and finishing within the same period of time, all running on the same processor, with only one program actually executing on the processor at any instant. (Definition paraphrased from an encyclopedia entry.)

As the name implies, high concurrency means that the system can handle a large number of requests (connections) at the same time within a specified period of time.

So how to measure high concurrency?

High concurrency metrics

Response time: how long the system takes to respond to a request, i.e. the time for an HTTP request to return

Throughput: the number of requests processed per unit of time

QPS (TPS): the number of queries or transactions that can be processed per second

Number of concurrent users: the number of users simultaneously using the system while it still functions normally
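These metrics are related by a simple capacity rule (Little's law): sustained throughput is roughly the number of in-flight requests divided by the average response time. A quick back-of-the-envelope sketch (the numbers are illustrative, not from the article):

```python
# Rough capacity estimate relating the metrics above (Little's law):
# throughput (QPS) ~= concurrent in-flight requests / average response time.

def estimated_qps(concurrent_requests: int, avg_response_time_s: float) -> float:
    """Approximate requests handled per second at steady state."""
    return concurrent_requests / avg_response_time_s

# 200 requests in flight, each taking 50 ms on average:
qps = estimated_qps(200, 0.05)
print(qps)  # 4000.0
```

The same relation works backwards: to hit a target QPS, either increase how many requests the system can hold in flight, or cut the response time.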

Given the metrics above, improving concurrency means solving the following problems:

How to increase the number of concurrent connections?

How to handle business with so many connections?

How to improve the processing capacity of the application server?

How to use a microservice architecture to improve high-concurrency logic?

Let's analyze and solve these problems one by one.

1) How to increase the number of concurrent connections?

As shown in the figure below, the conventional one-thread-per-connection model dedicates one thread to each connection, so the pressure concentrates in memory: the memory overhead grows very large and the number of connections that can be supported is severely limited (the server simply falls over).

Single network connection model

To be fair, this is not a burden the developers have to carry: without a high-performance server, the connection capacity at the entry point is limited (Tomcat, for example, supports only a few thousand connections), so processing capacity is capped at a few thousand.

How to solve it? Choose an appropriate network I/O model, i.e. a multiplexing selector (such as epoll): by having a single thread poll for ready events, or by triggering on events, the server can support tens of thousands of connections or more. Combined with nginx as a load balancer, this works even better.
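The "one thread, many connections via a selector" idea can be sketched with Python's standard selectors module. The socket pairs below simulate clients so the example is self-contained; a real server would register accepted TCP connections instead:

```python
import selectors
import socket

# A single selector multiplexes many sockets on one thread -- the same
# mechanism (epoll/kqueue underneath) that lets event-driven servers
# support tens of thousands of connections.
sel = selectors.DefaultSelector()

def echo(conn):
    """Handle one readable socket: echo its data back."""
    data = conn.recv(1024)
    if data:
        conn.sendall(data)
    else:  # peer closed the connection
        sel.unregister(conn)
        conn.close()

# Two socket pairs stand in for two client connections.
pairs = [socket.socketpair() for _ in range(2)]
for server_side, _client in pairs:
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ, echo)

pairs[0][1].sendall(b"hello")
pairs[1][1].sendall(b"world")

# One thread services every ready socket as events arrive.
handled = 0
while handled < 2:
    for key, _mask in sel.select(timeout=1):
        key.data(key.fileobj)  # call the registered handler
        handled += 1

reply0 = pairs[0][1].recv(1024)
reply1 = pairs[1][1].recv(1024)
print(reply0, reply1)  # b'hello' b'world'
```

The key point is that no thread is parked on any single connection; memory cost per connection drops to the socket buffers plus a registry entry.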

2) How to handle business logic over so many connections?

We all know that nginx provides only reverse proxying and load balancing; it cannot process concrete business logic and cannot serve as an application server the way WebSphere, Tomcat, or Jetty can. What we can do is use nginx to distribute the large number of incoming connections across different application servers in a balanced way (round-robin, weight, or hash) for business processing.

Nginx load
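A minimal nginx configuration sketch for the distribution strategies just mentioned. The backend addresses are placeholders; round-robin is the default, with weight and ip_hash shown as the alternatives:

```nginx
http {
    upstream app_servers {
        # Default strategy: round-robin across the listed servers.
        # "weight=N" biases the distribution; uncommenting "ip_hash;"
        # instead pins each client IP to one backend.
        server 10.0.0.11:8080 weight=2;
        server 10.0.0.12:8080;
        # ip_hash;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }
}
```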

3) How to improve the processing capacity of the application server?

In order to improve the processing level of the application server, you need to know where the bottleneck of your application server is. Generally, there are two:

Database pressure: the database is the core module supporting the product's business, and under high concurrency the main pressure on the system also lands on the database. The ways to handle it are as follows:

The database itself: build effective indexes, separate reads from writes, run dual masters as mutual backups, and split databases and tables (sharding, e.g. with sharding-jdbc) to improve the database's processing capacity and reduce the pressure on any single instance.
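Splitting databases and tables boils down to routing each row by a shard key. A hypothetical sketch (the database and table names and the modulo scheme are illustrative; tools like sharding-jdbc do this routing transparently):

```python
# Route each user's rows to one of several physical databases and tables
# by shard key, spreading read/write pressure across instances.

NUM_DBS = 2
TABLES_PER_DB = 4

def route(user_id: int):
    """Return the (database, table) a user's rows live in."""
    db_index = user_id % NUM_DBS
    table_index = (user_id // NUM_DBS) % TABLES_PER_DB
    return f"user_db_{db_index}", f"user_table_{table_index}"

print(route(7))  # ('user_db_1', 'user_table_3')
print(route(8))  # ('user_db_0', 'user_table_0')
```

The routing must stay stable over time, which is why the shard key and scheme are chosen up front; changing NUM_DBS later requires migrating data.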

Combine with an in-memory store: use Redis, memcached, or similar to cache data dictionaries, enumerated values, and frequently accessed data according to business needs, reducing the number of database accesses and improving overall capacity.
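The usual pattern here is cache-aside: check the cache first and fall back to the database on a miss. A sketch using a plain dict with expiry as a stand-in for Redis (fetch_from_db is a hypothetical placeholder for a real query):

```python
import time

# Cache-aside: a dict with TTL stands in for Redis/memcached.
CACHE = {}          # key -> (expiry timestamp, value)
TTL_SECONDS = 60.0
db_hits = 0

def fetch_from_db(key: str) -> str:
    """Placeholder for a real database query."""
    global db_hits
    db_hits += 1  # count actual database accesses
    return f"value-for-{key}"

def get(key: str):
    entry = CACHE.get(key)
    if entry is not None and entry[0] > time.monotonic():
        return entry[1]                # cache hit: no database access
    value = fetch_from_db(key)         # cache miss: load from database
    CACHE[key] = (time.monotonic() + TTL_SECONDS, value)
    return value

get("country_codes")
get("country_codes")
print(db_hits)  # 1 -- the second call was served from the cache
```

With a real Redis the dict lookups become GET/SETEX calls, but the control flow is identical; the win is that hot keys stop touching the database at all.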

Web cluster architecture diagram

As shown in the web cluster architecture diagram above:

Use nginx to load-balance multiple application servers

Use Redis/memcached for business caching

Add a database cluster

Together these constitute the classic high-concurrency web cluster architecture.

Business logic in the code:

You can refer to the coding standards in Alibaba's Java Development Manual. In general, you can improve code performance by creating fewer threads, creating fewer objects, taking fewer locks, preventing deadlocks, and paying attention to memory reclamation.
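"Create fewer threads" in practice usually means reusing a fixed-size pool rather than spawning a thread per task. A short illustration (handle_request is a placeholder for real business logic):

```python
from concurrent.futures import ThreadPoolExecutor

# Reusing a fixed pool caps memory use and avoids per-task
# thread-creation overhead.

def handle_request(n: int) -> int:
    return n * n  # stand-in for real business logic

# 4 worker threads serve all 100 tasks, instead of 100 short-lived threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(100)))

print(results[:5])  # [0, 1, 4, 9, 16]
```

The Java equivalent is a shared ExecutorService; the principle is the same regardless of language.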

During development, a front-end/back-end separation architecture can be used to improve processing capacity on both sides, for example through static/dynamic separation and loose coupling.

4) How to use a microservice architecture to improve high-concurrency logic?

First, take a look at this very popular micro-service architecture diagram:

Microservice architecture diagram

It mainly contains 11 core components, which are:

Core support components

Service Gateway Zuul

Service Registration Discovery Eureka+Ribbon

Service configuration Center Apollo

Authentication and Authorization Center Spring Security OAuth

Service Framework Spring MVC/Boot

Monitoring and feedback components

Data bus Kafka

Log monitoring ELK

Call chain monitoring CAT

Metrics Monitoring KairosDB

Health check and alarm ZMon

Rate limiting and circuit breaking Hystrix, with metrics aggregation Turbine
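The circuit-breaking idea behind Hystrix can be sketched in a few lines: after a threshold of consecutive failures the breaker opens and callers get a fast fallback instead of waiting on a dead dependency. This is a toy illustration, not Hystrix's actual implementation:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after `threshold` consecutive
    failures, fail fast until `reset_after` seconds pass."""

    def __init__(self, threshold: int = 3, reset_after: float = 30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()        # circuit open: fail fast
            self.opened_at = None        # half-open: allow one retry
            self.failures = 0
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the circuit
            return fallback()
        self.failures = 0                # success resets the counter
        return result

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise RuntimeError("downstream service unavailable")

replies = [breaker.call(flaky, lambda: "fallback") for _ in range(3)]
print(replies)               # ['fallback', 'fallback', 'fallback']
print(breaker.opened_at is not None)  # True -- tripped after 2 failures
```

The value under high concurrency is that threads stop piling up behind a failing dependency; they return immediately with a degraded answer.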

Beyond the points above for resolving the logic-processing bottlenecks of a high-concurrency server, we should also consider network factors, for example using CDN acceleration to route requests from different locations to different service clusters and so avoid the impact of the network on speed.

In short, split your actual business as far as is reasonable. After splitting, similar services can achieve overall high performance and high concurrency through horizontal scaling; at the same time, place the more fragile resources at the end of the call chain and keep the access path as short as possible to reduce the resource consumption of each request. Between services, use direct RESTful HTTP calls, or message middleware such as Redis or Kafka; a single service uses nginx directly for a load-balanced cluster; meanwhile, separate the front end from the back end, split databases and tables, and so on.

Front-to-back separation

That is all on how to deal with the logic bottlenecks of a high-concurrency server. I hope the content above is of some help to you. If you think the article is good, feel free to share it so more people can see it.
