
How to Achieve High Concurrency in Nginx


This article explains in detail how Nginx achieves high concurrency. The editor finds the topic quite practical, so it is shared here for reference; I hope you gain something from reading it.

Interview questions:

How does Nginx achieve high concurrency? Why doesn't Nginx use multithreading? What are the common Nginx optimizations? What are the possible causes of a 502 error?

What the interviewer is looking for

The question mainly tests whether the candidate is familiar with the basic principles of Nginx. Most operations engineers know Nginx to some degree, but very few truly understand how it works. Only by understanding the principles can you tune it; otherwise you can only copy sample configurations, and when something goes wrong you will have no idea where to start.

Those with only a superficial understanding can set up a web server and host a site; junior ops engineers can enable HTTPS and configure a reverse proxy; intermediate ops engineers can define an upstream and write a regular-expression match; veterans do performance tuning, write ACLs, and may even modify the source code (the editor, admittedly, cannot).

Analysis of interview questions

1. How does Nginx achieve high concurrency?

Asynchronous, non-blocking, uses epoll and lots of low-level code optimizations.

If a server dedicated one process to each request, the number of processes would equal the number of concurrent connections, and under normal conditions most of those processes would simply sit waiting.

Nginx uses one master process and multiple worker processes.

The master process is primarily responsible for collecting and distributing requests: each time a request comes in, the master assigns a worker process to handle it.

The master process also monitors the state of the workers to ensure high reliability.

The number of worker processes is generally set to match the number of CPU cores. The number of requests an Nginx worker can hold at any one time is limited only by memory, so a single worker can handle many requests.

Nginx's asynchronous, non-blocking way of working takes advantage of this waiting time: while a request is waiting on I/O, the worker is free to do other work, so a handful of processes can serve a very large number of concurrent connections.

Each incoming request is handled by a worker process, but not from start to finish in one go. The worker processes the request up to the point where it would block, for example after forwarding the request to an upstream (backend) server and waiting for the response. Instead of blocking, the worker registers an event: "when the upstream returns, notify me and I will continue." Then it goes back to resting. If another request arrives in the meantime, the worker handles it in the same way. Once the upstream server responds, the event fires, the worker picks the request back up, and processing continues.

2. Why does Nginx not use multithreading?

Apache: creates a process or thread per connection, and each process or thread is allocated its own CPU time and memory (threads are much lighter than processes, so the worker MPM supports higher concurrency than prefork). High concurrency therefore consumes a lot of server resources.

Nginx: each worker uses a single thread to handle requests asynchronously and non-blockingly via epoll (the administrator configures the number of worker processes in the main Nginx configuration). It does not allocate CPU and memory per request, which saves a great deal of resources and avoids a lot of CPU context switching. This is what allows Nginx to support much higher concurrency.
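A rough sketch of how this event model appears in nginx.conf (these are standard Nginx directives; the settings shown are illustrative, not tuned values):

events {
    use epoll;       # event-driven, non-blocking connection handling on Linux
    multi_accept on; # each worker accepts as many new connections as it can at once
}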

3. What are the common optimization configurations of Nginx?

(1) Adjust worker_processes

Sets the number of worker processes Nginx spawns. Best practice is one worker process per CPU core.

To find the number of CPU cores on the system, run:

$ grep processor /proc/cpuinfo | wc -l
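If the command reports 4 cores, for example, the matching setting in nginx.conf would look like the line below (on reasonably recent Nginx versions, worker_processes auto; detects the core count automatically):

worker_processes 4;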

(2) Adjust worker_connections

The maximum number of connections each worker process can serve simultaneously. Multiplied by worker_processes, this gives the maximum number of clients Nginx can serve at once:

Max clients = worker_processes * worker_connections

To get the most out of Nginx, worker_connections is commonly set to 1024; it should not exceed the number of open files the system allows each process.
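A minimal sketch of the corresponding events block, with the resulting capacity worked out in the comment (the figures assume the 4 worker processes from the example above):

events {
    worker_connections 1024; # 4 workers * 1024 connections = 4096 max clients
}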

(3) Enable gzip compression

Compressing responses reduces the bandwidth consumed on the client side and improves page load speed.

A commonly recommended gzip configuration, placed in the http section, is sketched below; the exact levels and MIME types vary by site, so treat the values as illustrative:
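gzip on;
gzip_comp_level 5;   # moderate compression; higher levels cost more CPU
gzip_min_length 256; # skip compressing very small responses
gzip_proxied any;
gzip_vary on;
gzip_types text/plain text/css application/json application/javascript application/xml text/xml text/javascript;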

(4) Enable caching for static files

Caching static files reduces bandwidth and improves performance. You can add the following directive so that browsers cache static assets:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 365d; }

(5) Timeouts

Keepalive connections reduce the CPU and network overhead of repeatedly opening and closing connections. The variables most often adjusted for best performance are sketched below (the values are illustrative, not recommendations):
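client_header_timeout 12; # how long to wait for the client to send request headers
client_body_timeout 12;   # how long to wait for the client to send the request body
keepalive_timeout 15;     # how long an idle keepalive connection stays open
send_timeout 10;          # how long to wait between writes when the client is slow to read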

(6) Disable access_logs

Access logging records every request Nginx handles, which costs CPU and disk I/O and can reduce Nginx performance.

To disable access logging completely:

access_log off;

If access logging is required, enable access-log buffering instead:

access_log /var/log/nginx/access.log main buffer=16k;

4. What are the possible causes of a 502 error?

(1) Has FastCGI been started?

(2) Not enough FastCGI worker processes

(3) FastCGI takes too long to execute
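If the backend genuinely needs more time, the FastCGI timeouts on the Nginx side can be raised to match; a minimal sketch with illustrative values:

fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;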

(4) FastCGI buffers are too small

Like Apache, Nginx has front-end buffer limits, and the buffer parameters can be adjusted:

fastcgi_buffer_size 32k; fastcgi_buffers 8 32k;

(5) Proxy buffers are too small

If you are proxying, adjust:

proxy_buffer_size 16k; proxy_buffers 4 16k;

(6) PHP script takes too long to execute

In php-fpm.conf, the relevant setting is typically request_terminate_timeout:

request_terminate_timeout = 0s

Change the 0s (which means no limit) to an actual time limit.

About "Nginx is how to achieve high concurrency" This article is shared here, I hope the above content can be of some help to everyone, so that you can learn more knowledge, if you think the article is good, please share it to let more people see.
