

How to configure nginx load balancer

2025-02-23 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report --

This article introduces how to configure an nginx load balancer. Many people have doubts about this in day-to-day operations, so the editor has consulted various materials and sorted out a simple, easy-to-follow procedure: building nginx from source, writing a minimal configuration file, and then demonstrating reverse proxying, load balancing, and static resource access. I hope it helps answer your doubts; please follow along and study!

1. Installing nginx and related modules from source:

Let's start with the source packages the build needs:

-- nginx-1.13.7.tar.gz

-- openssl-1.1.0g.tar.gz

-- pcre-8.41.tar.gz

-- zlib-1.2.11.tar.gz

Corresponding download address:

wget http://nginx.org/download/nginx-1.13.7.tar.gz
wget https://www.openssl.org/source/openssl-1.1.0g.tar.gz
wget http://ftp.pcre.org/pub/pcre/pcre-8.41.tar.gz
wget http://www.zlib.net/zlib-1.2.11.tar.gz

(1) decompress nginx:

(2) decompress openssl:

(3) decompress pcre:

(4) decompress zlib:
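Steps (1) through (4) are all the same `tar` invocation against the archives downloaded above, e.g. `tar -xzf nginx-1.13.7.tar.gz` (and likewise for openssl, pcre, and zlib). A self-contained sketch of the same flags, using a throwaway archive name (`pkg-demo` is made up for illustration):

```shell
# Each decompress step is a plain tar call, e.g.:
#   tar -xzf nginx-1.13.7.tar.gz
#   tar -xzf openssl-1.1.0g.tar.gz
#   tar -xzf pcre-8.41.tar.gz
#   tar -xzf zlib-1.2.11.tar.gz
# Demonstration of the flags on a throwaway archive:
mkdir -p pkg-demo
echo "source tree" > pkg-demo/README
tar -czf pkg-demo.tar.gz pkg-demo   # -c create, -z gzip, -f archive name
rm -rf pkg-demo                     # discard the original tree
tar -xzf pkg-demo.tar.gz            # -x extract: recreates pkg-demo/
cat pkg-demo/README
```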

(5) configure: enter the nginx source directory first, then execute the following command:

./configure --prefix=/usr/local/nginx \
    --with-http_realip_module \
    --with-http_addition_module \
    --with-http_gzip_static_module \
    --with-http_secure_link_module \
    --with-http_stub_status_module \
    --with-stream \
    --with-pcre=/home/txp/share/nginx/pcre-8.41 \
    --with-zlib=/home/txp/share/nginx/zlib-1.2.11 \
    --with-openssl=/home/txp/share/nginx/openssl-1.1.0g

Then run make:

Then sudo make install:

Finally we can see the installed nginx in the /usr/local/nginx/ directory:

Now we can try to run nginx and access it (the following access is successful):

Here is a brief summary:

The installation steps of most open source software follow roughly the same routine (the modules we install later are built the same way, so we won't reinvent the wheel here):

-- ./configure

-- make

-- sudo make install

2. Write your own conf file

In the normal development process, we mainly need to configure the nginx.conf file under its conf folder.

root@ubuntu:/usr/local/nginx# ls
client_body_temp  conf  fastcgi_temp  html  logs  proxy_temp  sbin  scgi_temp  uwsgi_temp

The original contents of this file are:

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}

There is a lot of content here, so let's write a configuration file to access nginx:

txp@ubuntu:/usr/local/nginx$ sudo mkdir demo_conf
txp@ubuntu:/usr/local/nginx$ cd demo_conf/
txp@ubuntu:/usr/local/nginx$ vim demo_conf/demo.conf

worker_processes 4;                    # number of worker processes

events {
    worker_connections 1024;           # maximum concurrent connections
}

http {
    server {
        listen 8888;                   # listening port
        server_name localhost;         # server name
        client_max_body_size 100m;     # maximum request body size

        location / {
            root /usr/local/nginx/html/;   # serve the html directory on this server
        }
    }
}

Now let's see whether our own configuration works. First stop the previous nginx:

./sbin/nginx -s stop

Then start nginx with our configuration file:

./sbin/nginx -c demo_conf/demo.conf

Here is an extension of the basics:

Nginx consists of modules controlled by directives specified in the configuration file. Directives are divided into simple directives and block directives. A simple directive consists of a name and parameters separated by spaces and ends with a semicolon (;). A block directive has the same structure, but instead of ending with a semicolon it ends with a set of additional directives surrounded by curly braces ({ and }). If a block directive can have other directives inside its braces, it is called a context (for example events, http, server, and location). Directives placed outside any context in the configuration file are considered to be in the main context. The events and http directives reside in the main context, server resides in http, and location resides in server.
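The terminology above can be seen in a minimal sketch (the values here are illustrative, not a working deployment):

```nginx
worker_processes 1;          # simple directive in the main context; ends with ';'

events {                     # block directive that contains other directives: a context
    worker_connections 1024; # simple directive inside the events context
}

http {                       # http context, in the main context
    server {                 # server context, inside http
        listen 8080;
        location / {         # location context, inside server
            root html;
        }
    }
}
```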

3. Demonstrating load balancing, reverse proxying, and static resource access:

-- Reverse proxy principle (Reverse Proxy): the proxy server accepts connection requests from the Internet, forwards those requests to servers on the internal network, and returns the results obtained from the servers to the client that requested the connection. Put simply, the real servers cannot be reached directly from the external network; all access must go through the proxy, as shown in the following figure:

In the figure above there are two gateways: one is the nginx application-layer gateway, the other is the router's hardware gateway. Nginx and the servers sit in the same local area network; the router performs port mapping (NAT) so that nginx can be reached directly, giving the impression that nginx is on the public network.

Note that the backend servers here are not exposed directly; they provide their services through the nginx proxy. The outside world accesses the public IP, and the port mapping routes the request to nginx.

Now let's write the proxy configuration for nginx (for example, here I use the 143 machine to proxy the 141 machine):

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8888;
        server_name localhost;
        client_max_body_size 100m;

        location / {
            root /usr/local/nginx/html/;
            proxy_pass http://192.168.29.141;
        }
    }
}

Note: the 141 machine also has nginx installed, so when I visit the 143 machine I actually get the content served by the 141 machine. That is the proxy at work:

-- Load balancing: load balancing exists to reduce the access pressure on servers. The more requests a server receives per unit of time, the greater the pressure on it, and once the load exceeds its capacity the server will crash. (For example, during the annual Singles' Day event, Taobao relies on nginx's load balancing; otherwise its servers would be overwhelmed by the sheer number of users active that day.) To avoid crashes and give users a better experience, we share the load: we set up many servers to form a server cluster. When a user visits the site, the request first reaches an intermediate server (our nginx), which selects a less-loaded server in the cluster and forwards the request to it. This way, each visit keeps the pressure across the cluster roughly balanced, sharing the load and avoiding the crash of any single server.

Let me demonstrate the case of load balancing:

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    upstream backend {
        server 192.168.29.142 weight=2;   # weight sets this server's share of requests
        server 192.168.29.141 weight=1;
    }

    server {
        listen 8888;
        server_name localhost;
        client_max_body_size 100m;

        location / {
            # root /usr/local/nginx/html/;
            # proxy_pass http://192.168.29.141;
            proxy_pass http://backend;
        }
    }
}

Note: a higher weight means a larger share of visits. Since all three of my machines have nginx installed and show the same content, the responses look identical, but in fact out of every three requests, 142 serves two and 141 serves one. I have three machines here: 141, 142, and 143:
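To make the 2:1 split concrete, here is a rough shell sketch of how six consecutive requests would be shared under weight=2 / weight=1. (nginx's actual smooth weighted round-robin interleaves the two servers in a different order, but the overall ratio is the same.)

```shell
# Simulate the share of 6 requests between the two upstreams:
# weight=2 -> 2 of every 3 requests; weight=1 -> 1 of every 3.
count_142=0
count_141=0
for i in 1 2 3 4 5 6; do
    if [ $((i % 3)) -ne 0 ]; then
        echo "request $i -> 192.168.29.142"
        count_142=$((count_142 + 1))
    else
        echo "request $i -> 192.168.29.141"
        count_141=$((count_141 + 1))
    fi
done
echo "192.168.29.142 served $count_142 requests, 192.168.29.141 served $count_141"
```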

-- Accessing static resources (pictures and videos)

Here I put some pictures on the 143 machine: create an images folder in the /usr/local/nginx directory, then move the pictures on the 143 machine into images:

root@ubuntu:/usr/local/nginx# ls
client_body_temp  conf  fastcgi_temp  html  images  logs  proxy_temp  sbin  scgi_temp  uwsgi_temp  vip_conf
root@ubuntu:/usr/local/nginx# mv /home/txp/share/nginx/*.png images/
root@ubuntu:/usr/local/nginx# ls
client_body_temp  conf  fastcgi_temp  html  images  logs  proxy_temp  sbin  scgi_temp  uwsgi_temp  vip_conf
root@ubuntu:/usr/local/nginx# cd images/
root@ubuntu:/usr/local/nginx/images# ls
1.png  2.png  3.png

Then configure the conf file:

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    upstream backend {
        server 192.168.29.142 weight=2;
        server 192.168.29.141 weight=1;
    }

    server {
        listen 8888;
        server_name localhost;
        client_max_body_size 100m;

        location / {
            # root /usr/local/nginx/html/;
            # proxy_pass http://192.168.29.141;
            proxy_pass http://backend;
        }

        location /images/ {
            root /usr/local/nginx/;
        }
    }
}

The implementation results are as follows:
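One detail worth knowing about the /images/ block: with root, the full request URI is appended to the root path, so GET /images/1.png maps to /usr/local/nginx/images/1.png. If you ever want to serve the same files under a different URI prefix, alias substitutes the matched prefix instead (the /pics/ prefix below is a hypothetical illustration, not part of this setup):

```nginx
location /images/ {
    root /usr/local/nginx/;         # /images/1.png -> /usr/local/nginx/images/1.png
}

# hypothetical alternative: alias replaces the matched prefix
location /pics/ {
    alias /usr/local/nginx/images/; # /pics/1.png -> /usr/local/nginx/images/1.png
}
```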

Now I'm going to demonstrate video access. Again, I create a media directory and then move the test.mp4 copy on the 143 machine into the media directory:

root@ubuntu:/usr/local/nginx# mv /home/txp/share/nginx/test.mp4 media/
root@ubuntu:/usr/local/nginx# cd media/
root@ubuntu:/usr/local/nginx/media# ls
test.mp4

Conf file configuration:

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    upstream backend {
        server 192.168.29.142 weight=2;
        server 192.168.29.141 weight=1;
    }

    server {
        listen 8888;
        server_name localhost;
        client_max_body_size 100m;

        location / {
            # root /usr/local/nginx/html/;
            # proxy_pass http://192.168.29.141;
            proxy_pass http://backend;
        }

        location /images/ {
            root /usr/local/nginx/;
        }

        location ~ \.(mp3|mp4)$ {    # a regular expression location
            root /usr/local/nginx/media/;
        }
    }
}

The results are as follows:

At this point, the study of "how to configure nginx load balancing" is over. I hope it has resolved your doubts. Combining theory with practice is the best way to learn, so go and try it! If you want to continue learning more related knowledge, please keep following the website; the editor will continue to bring you more practical articles!
