This article explains the process of installing MySQL-Cluster 7.3.4 on CentOS 6.5. The method described is simple, fast and practical, so interested readers may wish to follow along.
Environment: CentOS 6.5 + MySQL-Cluster 7.3.4 (the latest GA version at the time), planned across 2 machines: one acting as management node + SQL node + data node, the other as SQL node + data node.
First, download. To keep the installation simple, RPM packages are used here to avoid the pain of compiling from source:
Go to www.mysql.com and download the RPM bundle from http://dev.mysql.com/downloads/cluster/. Remember to select the MySQL-Cluster-gpl-7.3.4-1.el6.x86_64.rpm-bundle.tar package under Red Hat Enterprise Linux / Oracle Linux, which saves you from downloading the packages one by one.
Second, environmental cleaning and installation:
1. Clear out the old version of mysql:
First remove the mysql installation that shipped with the operating system:
yum -y remove mysql
Then check for leftover packages:
rpm -qa | grep -i mysql
For the remaining mysql packages found, delete them in the following format:
rpm -e --nodeps mysql-libs-5.1.71-1.el6.x86_64
2. Prepare to install the cluster version of MySQL: put MySQL-Cluster-gpl-7.3.4-1.el6.x86_64.rpm-bundle.tar in a directory (such as /package) and extract it:
tar -xvf MySQL-Cluster-gpl-7.3.4-1.el6.x86_64.rpm-bundle.tar
Get the following list of files:
MySQL-Cluster-client-gpl-7.3.4-1.el6.x86_64.rpm
MySQL-Cluster-devel-gpl-7.3.4-1.el6.x86_64.rpm
MySQL-Cluster-embedded-gpl-7.3.4-1.el6.x86_64.rpm
MySQL-Cluster-server-gpl-7.3.4-1.el6.x86_64.rpm
MySQL-Cluster-shared-compat-gpl-7.3.4-1.el6.x86_64.rpm
MySQL-Cluster-shared-gpl-7.3.4-1.el6.x86_64.rpm
MySQL-Cluster-test-gpl-7.3.4-1.el6.x86_64.rpm
3. Install the cluster version of MySQL:
Create the folders (one for each of the three node types):
Data (storage) node: mkdir /var/lib/mysql/data
Management node: mkdir /var/lib/mysql-cluster
SQL node: no extra folder or authorization is required
Process directory: mkdir /var/run/mysqld
Use the following commands to change permissions so the directories are writable:
chmod -R 1777 /var/lib/mysql
chmod -R 1777 /var/run/mysqld
chmod -R 1777 /var/lib/mysql-cluster
Then install the server and client packages:
rpm -ivh MySQL-Cluster-server-gpl-7.3.4-1.el6.x86_64.rpm
rpm -ivh MySQL-Cluster-client-gpl-7.3.4-1.el6.x86_64.rpm
Note in particular that when the server GPL package is installed, the following message appears, reminding us that the initial root password for the cluster is written to the /root/.mysql_secret file:
-
A RANDOM PASSWORD HAS BEEN SET FOR THE MySQL root USER!
You will find that password in '/root/.mysql_secret'.
You must change that password on your first connect,
no other statement but 'SET PASSWORD' will be accepted.
See the manual for the semantics of the 'password expired' flag.
Also, the account for the anonymous user has been removed.
In addition, you can run:
/usr/bin/mysql_secure_installation
which will also give you the option of removing the test database.
This is strongly recommended for production servers.
4. Write the cluster configuration file:
cd /var/lib/mysql-cluster
vim config.ini
-
[computer]
Id=mgr-server-01
HostName=10.10.0.1
[mgm default]
Datadir=/var/lib/mysql-cluster
[mgm]
HostName=10.10.0.1
NodeId=60
ExecuteOnComputer=mgr-server-01
PortNumber=1186
ArbitrationRank=2
[ndbd default]
NoOfReplicas=2
DataMemory=8G
IndexMemory=2G
[ndbd]
HostName=10.10.0.1
DataDir=/var/lib/mysql
NodeId=1
[ndbd]
HostName=10.10.0.2
DataDir=/var/lib/mysql
NodeId=2
[mysqld]
HostName=10.10.0.1
NodeId=81
[mysqld]
HostName=10.10.0.2
NodeId=82
-
5. Configure the MySQL option file:
vim /etc/my.cnf
[client]
socket=/var/lib/mysql/mysql.sock
[mysqld]
max_connections=100
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
ndbcluster
ndb-connectstring=10.10.0.1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[mysql_cluster]
ndb-connectstring=10.10.0.1
-
Third, initial cluster startup and root password change (start the nodes in strict order):
Before the first start, make sure the firewalls on both machines are off (service iptables stop), or open the required ports: the management port 1186 and the MySQL port 3306.
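If you prefer to keep iptables running rather than stopping it, here is a minimal sketch of opening just those two ports on CentOS 6 (rule order and chain names may need adjusting for your existing rule set, and the data nodes may use additional ports, so stopping the firewall during the very first start is the simplest option):
iptables -I INPUT -p tcp --dport 1186 -j ACCEPT   # management node port
iptables -I INPUT -p tcp --dport 3306 -j ACCEPT   # MySQL/SQL node port
service iptables save                             # persist the rules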
Start the management node for the first time: ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Start the data nodes (on both machines, first start only): ndbd --initial
Start the SQL nodes (on both machines): mysqld_safe --defaults-file=/etc/my.cnf --explicit_defaults_for_timestamp &
Watch the console output throughout the startup; if any error messages appear, resolve them promptly according to the error log.
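If something fails to come up, these are the places I would look first (a sketch; the cluster log name includes the management node's NodeId, 60 in the config.ini above, and the mysqld log path comes from /etc/my.cnf):
tail -f /var/lib/mysql-cluster/ndb_60_cluster.log   # management/cluster log
tail -f /var/log/mysqld.log                         # SQL node (mysqld) error log
ndb_mgm -e show                                     # quick view of which nodes are connected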
-
If everything comes up, open the management console with ndb_mgm and type show:
ndb_mgm> show
Cluster Configuration
-
[ndbd(NDB)]     2 node(s)
id=1    @10.x.0.1  (mysql-5.6.15 ndb-7.3.4, Nodegroup: 0, *)
id=2    @10.x.0.2  (mysql-5.6.15 ndb-7.3.4, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=60   @10.x.0.1  (mysql-5.6.15 ndb-7.3.4)
[mysqld(API)]   2 node(s)
id=81   @10.x.0.1  (mysql-5.6.15 ndb-7.3.4)
id=82   @10.x.0.2  (mysql-5.6.15 ndb-7.3.4)
-
Change the password:
Once mysqld is running normally (you can check with pgrep mysqld), change the password as follows:
mysql -u root -p
Enter the random password (see the /root/.mysql_secret file), and after logging in change it with:
SET PASSWORD = PASSWORD('new password');
Run the above once on every server that hosts a SQL node.
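Putting the above together, a minimal sketch of the first login on a SQL node (the new password here is only a placeholder):
cat /root/.mysql_secret                            # read the generated random password
mysql -u root -p                                   # log in with that password
mysql> SET PASSWORD = PASSWORD('NewPassword');     # the only statement accepted until the password is changed
mysql> exit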
-
4. Cluster test:
mysql -u root -p
Enter the password to log in, then create a new database with the following statements:
CREATE DATABASE clustertest;
USE clustertest;
CREATE TABLE testtable (Count INT) ENGINE=NDBCLUSTER;
Note that only tables using the NDBCLUSTER engine are synchronized across the cluster, so it is essential to add the ENGINE=NDBCLUSTER clause when creating tables.
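To confirm that data really is shared between the two SQL nodes, a quick sanity check (a sketch; run the first command on 10.10.0.1 and the second on 10.10.0.2):
# on 10.10.0.1
mysql -u root -p -e "INSERT INTO clustertest.testtable VALUES (1);"
# on 10.10.0.2 - the row inserted on the other node should appear
mysql -u root -p -e "SELECT * FROM clustertest.testtable;"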
5. Shutting down the cluster (strict order required):
Shut down the SQL nodes first: stop mysqld (for example with mysqladmin -u root -p shutdown).
Then execute on the management node: shell> ndb_mgm -e shutdown
This safely shuts down the management node and the data nodes.
After shutting down, use the following commands to check that the processes have actually exited:
pgrep mysqld
ps aux | grep ndbd
If any are still running, find the corresponding pid and kill it.
6. Starting the cluster again:
The order in which the whole cluster is started:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
ndbd
mysqld_safe --defaults-file=/etc/my.cnf --explicit_defaults_for_timestamp &
-
Miscellaneous notes:
Firewall policy adjustment: iptables -A INPUT -s 192.168.100.0/24 -i eth3 -p tcp -m tcp -j ACCEPT
Password change method 1:
mysqladmin -u root password 'root' (sets the password; use before a password has been set)
mysqladmin -u root -p'xxxxxx' password 'NewPassword' (use after installation, once mysqld is running and a password already exists)
Password change method 2:
UPDATE mysql.user SET Password=PASSWORD('NewPassword') WHERE User='root';
FLUSH PRIVILEGES;
Password change method 3:
Start mysql with security controls disabled:
mysqld_safe --user=mysql --skip-grant-tables --skip-networking &
then change the password with either of the two methods above.
Note: unless otherwise stated, the configuration below is the same on both the master and the backup machine!
I. Environment introduction
1. CentOS version: CentOS 7.1 on both the master and the backup.
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
[root@localhost ~]# cat /proc/version
Linux version 3.10.0-229.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC)) #1 SMP Fri Mar 6 11:36:42 UTC 2015
2. Modify the hostname and hosts file
[root@localhost ~]# hostnamectl set-hostname node-01
[root@localhost ~]# vi /etc/hosts
Append node-01 to the 127.0.0.1 line (on the backup machine, do the same with node-02).
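For reference, a minimal sketch of the relevant /etc/hosts entries; mapping node-01/node-02 to 172.21.4.51/52 is an assumption based on the topology below:
127.0.0.1   localhost localhost.localdomain node-01
172.21.4.51 node-01
172.21.4.52 node-02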
II. Topology planning:
a)
| IP address | Software | Status |
| 172.21.4.51 (VIP: 172.21.4.44) | keepalived+nginx | Master |
| 172.21.4.52 (VIP: 172.21.4.44) | keepalived+nginx | Backup |
| 172.21.4.91 | IIS | Web1 |
| 172.21.4.92 | IIS | Web2 |
b)
(Diagram) Internet -> ISP Router -> HA pair: eth0 172.21.4.51 (Master) / eth0 172.21.4.52 (Backup) sharing VIP 172.21.4.44 -> Web1 (172.21.4.91) and Web2 (172.21.4.92)
Note: Port 80 has been mapped on the gateway for VIP (172.21.4.44).
Issues to consider in this architecture:
1. If Master is running, Master occupies vip and Nginx can serve normally.
2. If Master fails, Backup preempts vip and Nginx can serve normally.
3. If any front-end nginx service fails, vip resources will be transferred to another server and a reminder email will be sent.
4. Nginx needs to check the health of the backend servers; since the application is a virtual directory under the default web site and cannot be moved, the check must be able to probe that virtual directory.
5. The application depends on sessions, but there is no session sharing, so when the active real server changes, users will be affected to some extent.
III. Preparatory work before installation
In the CentOS 7 environment the firewalld service is used by default. Even if you modify the iptables rules, they are reinitialized on restart and you have to run systemctl restart iptables.service again for the rules to take effect. Since I am not yet familiar with firewalld and do not know what advantages it has over iptables, to be on the safe side I switch back to plain iptables.
1. Stop firewalld:
[root@node-01 ~]# systemctl stop firewalld.service
# stop the firewall
[root@node-01 ~]# systemctl disable firewalld.service   (or: systemctl mask firewalld.service)
# prevent the firewall from starting at boot
2. Install the iptables firewall
[root@node-01 ~]# yum install iptables-services -y
[root@node-01 ~]# systemctl enable iptables
3. Before configuring keepalived and nginx, make sure the hosts in the cluster can communicate freely with each other (and open the port-80 web access rule), otherwise split-brain or other problems are likely. You can add the following rules directly to the configuration file:
[root@node-01 ~]# vi /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -s 172.21.4.51 -j ACCEPT
-A INPUT -s 172.21.4.52 -j ACCEPT
-A INPUT -s 172.21.4.91 -j ACCEPT
-A INPUT -s 172.21.4.92 -j ACCEPT
[root@node-01 ~]# systemctl restart iptables.service
IV. Installation of Keepalived and Nginx
1. Install ipvsadm
[root@node-01 ~]# yum install ipvsadm
[root@node-01 ~]# ipvsadm -v
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
IPVS (IP Virtual Server) is the foundation of the whole load-balancing setup; without it, fault isolation and failover would be meaningless. IPVS is managed through the ipvsadm program, which CentOS 7.1 installs by default.
2. Install keepalived (CentOS 7.1 actually ships it)
[root@node-01 ~]# yum install keepalived
[root@node-01 ~]# keepalived -v
Keepalived v1.2.13 (03/06,2015)
3. Install nginx (latest stable version 1.8.0)
Note 1: nginx was initially installed with yum, but when a backend server went down or became unhealthy, nginx kept forwarding requests to it as usual; it was removed with yum erase nginx and built manually instead, because the extra nginx_upstream_check_module module has to be compiled in.
Note 2: the build may stop with an error like: ./configure: error: the HTTP rewrite module requires the PCRE library.
So install the required support libraries first: pcre for regular-expression matching, zlib for compression, and so on.
[root@node-01 ~]# yum -y install gcc-c++ pcre-devel zlib-devel
1) Create a www group for nginx, create a no-login nginx account and put it in the www group
[root@node-01 ~]# groupadd -f www
[root@node-01 ~]# useradd -d /var/cache/nginx -s /sbin/nologin -g www nginx
2) Create directories for the nginx log files and grant permissions
[root@node-01 ~]# mkdir /var/log/nginx
[root@node-01 ~]# mkdir /usr/local/nginx
[root@node-01 ~]# chown -R nginx.www /var/log/nginx
[root@node-01 ~]# chown -R nginx.www /usr/local/nginx
3) Download nginx and the extra module, then build and install
[root@node-01 ~]# mkdir /nginx
[root@node-01 ~]# cd /nginx
[root@node-01 nginx]# wget http://nginx.org/download/nginx-1.8.0.tar.gz
[root@node-01 nginx]# wget https://github.com/yaoweibin/nginx_upstream_check_module/archive/master.zip
[root@node-01 nginx]# tar -xvf nginx-1.8.0.tar.gz
[root@node-01 nginx]# unzip master.zip
# this extracts the nginx_upstream_check_module-master directory into the current directory
[root@node-01 nginx]# cd nginx-1.8.0/
[root@node-01 nginx-1.8.0]# patch -p1 < /nginx/nginx_upstream_check_module-master/check_1.7.5+.patch
# since the stable version 1.8.0 is being installed, the check_1.7.5+.patch patch is the right one; pick the matching patch for other versions
# if you get "bash: patch: command not found", install the patch package: yum -y install patch
[root@node-01 nginx-1.8.0]# ./configure --prefix=/usr/local/nginx --user=nginx --group=www --pid-path=/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --add-module=/nginx/nginx_upstream_check_module-master
[root@node-01 nginx-1.8.0]# make && make install
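Before going further, it is worth confirming that the check module really was compiled in and that the configuration parses (a sketch; the first command should print the --add-module path given at configure time):
[root@node-01 nginx-1.8.0]# /usr/local/nginx/sbin/nginx -V 2>&1 | grep -o 'add-module=[^ ]*'
[root@node-01 nginx-1.8.0]# /usr/local/nginx/sbin/nginx -t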
To make day-to-day operation easier, create a symlink and a systemd unit file so nginx can be managed with systemctl:
[root@node-01 ~]# ln -s /usr/local/nginx/sbin/nginx /usr/sbin/nginx
[root@node-01 ~]# vi /usr/lib/systemd/system/nginx.service
# add the following:
#----------Begin----------
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf
ExecStart=/usr/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
#----------end----------
[root@node-01 /]# systemctl enable nginx.service
This makes nginx.service start automatically when the system boots.
4) Set keepalived and nginx to start at boot as well:
[root@node-01 ~]# systemctl enable keepalived
[root@node-01 ~]# systemctl enable nginx
V. Keepalived configuration
Turn off SELinux before configuring keepalived:
a. vi /etc/selinux/config
b. Set SELINUX=disabled and save
c. Execute setenforce 0
(If you only want to turn it off temporarily, just run setenforce 0.)
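Steps a-c can also be scripted; a small sketch, assuming the stock /etc/selinux/config layout:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # permanent, takes effect after reboot
setenforce 0                                                   # immediate, lasts until reboot
getenforce                                                     # should now report Permissive (or Disabled after a reboot)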
[root@node-01 ~]# vi /etc/keepalived/keepalived.conf
# keepalived configuration #
! Configuration File for keepalived
global_defs {
}
vrrp_script chk_nginx {
    # must be defined above the track_script reference, otherwise it has no effect (confirmed after several tests)
    script "killall -0 nginx"
    # use a shell command to check whether the nginx process exists
    interval 1
    # check once per second
    weight -15
    # when the nginx process no longer exists, lower the priority by 15
}
vrrp_instance VI_1 {
    state MASTER
    # MASTER on the primary; change this to BACKUP (in capitals) on the standby
    interface enp4s0
    # network interface that keepalived monitors
    virtual_router_id 51
    # virtual_router_id must be identical on MASTER and BACKUP within the same instance
    priority 100
    # priority: the higher the number, the higher the priority; when copying this conf to the other machine, set its priority lower than the MASTER's
    advert_int 1
    # interval (in seconds) between synchronization checks of the MASTER and BACKUP load balancers
    authentication {
        auth_type PASS
        auth_pass 376879148
        # authentication type is PASS or AH; PASS is generally used (AH is said to be problematic); the password must match on master and backup, otherwise errors occur
    }
    virtual_ipaddress {
        172.21.4.44
        # virtual IP; multiple addresses are allowed, one per line, without a netmask; this IP must match the VIP configured on the LVS clients
    }
    track_script {
        chk_nginx
        # reference the vrrp_script defined above
    }
    notify_master "/etc/keepalived/changemail.py master"
    notify_backup "/etc/keepalived/changemail.py backup"
    notify_fault "/etc/keepalived/changemail.py fault"
    # scripts executed when switching to master, backup, or fault state
}
#
VI. Email reminders when the HA state changes
[root@node-02 ~]# python -V
Python 2.7.5
[root@node-01 ~]# vi /etc/keepalived/changemail.py
# changemail.py: send a notification email with Python 2.7 #
#!/usr/bin/python
# -*- coding: UTF-8 -*-
import smtplib
import socket
import time
from email.MIMEText import MIMEText
from email.Utils import formatdate
from email.Header import Header
import sys
# email account details; fill in according to your environment
smtpHost = 'smtp.exmail.qq.com'
smtpPort = '25'
sslPort = '465'
fromMail = 'youki@appi.com'
toMail = 'youki@appi.com'
username = 'youki@appi.com'
password = 'xxxxxxx'
# make sure Chinese text is handled correctly
reload(sys)
sys.setdefaultencoding('utf8')
# email subject and body
subject = socket.gethostname() + " HA status has changed"
body = (time.strftime("%Y-%m-%d %H:%M:%S")) + " vrrp transition, " + socket.gethostname() + " changed to be " + sys.argv[1]
# build the email
encoding = 'utf-8'
mail = MIMEText(body.encode(encoding), 'plain', encoding)
mail['Subject'] = Header(subject, encoding)
mail['From'] = fromMail
mail['To'] = toMail
mail['Date'] = formatdate()
try:
    # three ways to talk to the SMTP server: plaintext / TLS / SSL; pick the one your SMTP server supports
    # plaintext: the communication is not encrypted
    # smtp = smtplib.SMTP(smtpHost, smtpPort)
    # smtp.ehlo()
    # smtp.login(username, password)
    # TLS: the communication is encrypted on the normal SMTP port, so the email data is safe
    # smtp = smtplib.SMTP(smtpHost, smtpPort)
    # smtp.ehlo()
    # smtp.starttls()
    # smtp.ehlo()
    # smtp.login(username, password)
    # pure SSL: the communication is encrypted on the SSL port
    smtp = smtplib.SMTP_SSL(smtpHost, sslPort)
    smtp.ehlo()
    smtp.login(username, password)
    # send the email
    smtp.sendmail(fromMail, toMail, mail.as_string())
    smtp.close()
    print 'OK'
except Exception:
    print 'Error: unable to send email'
[root@node-01 ~]#
#
1. After creating the script above, remember to make it executable, otherwise it cannot run.
[root@node-01 ~]# chmod +x /etc/keepalived/changemail.py
[root@node-01 ~]# scp /etc/keepalived/keepalived.conf 172.21.4.52:/etc/keepalived
Copy the configuration file to the backup server (remember to change state to BACKUP and lower the priority there), then use the ip addr show command on both hosts to see which one currently holds the VIP.
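Once both nodes are running keepalived, a simple way to exercise the failover path (a sketch; the interface name enp4s0 comes from the keepalived.conf above):
# on the master: simulate an nginx failure
[root@node-01 ~]# systemctl stop nginx
# on the backup: the VIP should show up within a few seconds
[root@node-02 ~]# ip addr show enp4s0 | grep 172.21.4.44
# start nginx on the master again and the VIP should move back (a notification email should arrive for each transition)
[root@node-01 ~]# systemctl start nginx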
2. Specify the location of the keepalived log:
Edit the /etc/sysconfig/keepalived file on both the master and backup nodes.
[root@node-01 ~]# vi /etc/sysconfig/keepalived
Change the last line KEEPALIVED_OPTIONS="-D" to: KEEPALIVED_OPTIONS="-D -d -S 0"
3. Modify the log configuration file /etc/rsyslog.conf on both nodes
[root@node-01 ~]# vi /etc/rsyslog.conf
Add the following configuration:
# keepalived -S 0
local0.*    /var/log/keepalived.log
4. Restart the log service
[root@node-01 ~]# systemctl restart rsyslog.service
5. Check that the /var/log/keepalived.log file now exists
Note:
1. Log output goes to /var/log/messages by default; for more detailed log output, start keepalived with the -d parameter.
2. If both nodes are configured as MASTER with the same priority, the node on which the vrrp service is started will replace the running node and become active.
3. If the MASTER has the higher priority, it is not affected by the backup going down or up, and when the MASTER itself comes back up it will preempt control again.
4. If both are configured as MASTER with the same priority and the running primary goes down (e.g. a network outage), the backup takes over automatically and the original primary will not preempt control when it comes back up.
# keepalived runs the check script periodically, analyses its result, and dynamically adjusts the priority of the vrrp_instance:
# if the script exits 0 and the configured weight is greater than 0, the priority is raised accordingly
# if the script exits non-zero and the configured weight is less than 0, the priority is lowered accordingly
# in all other cases the priority stays at the value configured for priority in the configuration file
# points to note here:
# 1) the priority is not raised or lowered repeatedly; once the tracked object recovers, the priority returns to its configured value
# 2) you can write several check scripts and give each one a different weight
# 3) whether raised or lowered, the final priority stays in the range [1, 254]; it never becomes less than or equal to 0 or greater than or equal to 255
# in this way scripts can monitor the state of the business process and adjust the priority dynamically to achieve master/backup switching, as in the worked example below
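As a worked example of the weight mechanism above (assuming the backup is configured with priority 90, a value this article leaves to you): while nginx is running, the master keeps its configured priority of 100 and holds the VIP. If nginx dies, chk_nginx fails and the master's effective priority drops to 100 - 15 = 85, which is below 90, so the backup wins the VRRP election and takes over the VIP. When nginx is restored, the master's priority returns to 100 and, being the higher-priority node, it preempts the VIP again.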
VII. Nginx configuration
[root@node-01 nginx]# vi /usr/local/nginx/conf/nginx.conf
# nginx configuration #
user nginx www;
worker_processes 2;
# number of nginx worker processes; setting it equal to the total number of CPU cores is recommended
# worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
# bind worker processes to CPUs; you can bind N processes to N CPUs, or one process to several CPUs
error_log /var/log/nginx/error.log crit;
# pid /run/nginx.pid;
events {
    use epoll;
    # epoll is a multiplexed I/O model, but it is only available on Linux kernels 2.6 and above
    worker_connections 102400;
    # maximum number of connections; adjust to the hardware; in theory the maximum per nginx server is worker_processes * worker_connections
}
http {
    include /usr/local/nginx/conf/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    # tcp_nopush on;
    server_tokens off;
    # hide the nginx version number
    keepalive_timeout 65;
    proxy_intercept_errors on;
    # have nginx intercept backend error responses and handle them itself (e.g. via error_page)
    gzip on;
    # turn the gzip module on or off (on/off)
    gzip_min_length 1k;
    # minimum page size (taken from the Content-Length header) that will be compressed; the default 0 compresses everything; values above 1k are recommended, since compressing smaller pages may add overhead
    gzip_buffers 4 8k;
    # number and size of buffers used to hold the gzip output stream; 4 8k means memory is allocated in four 8k chunks
    gzip_http_version 1.1;
    # lowest HTTP protocol version to compress for (1.0/1.1)
    gzip_comp_level 3;
    # gzip compression level: 1 compresses least but is fastest, 9 compresses most but is slowest (faster transfer at the cost of CPU)
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/xml+rss;
    # MIME types to compress; "text/html" is always compressed whether listed or not; testing showed that compressing this site's images made them larger, so image types are left out
    gzip_vary on;
    # related to the Vary header: adds a header for proxy servers; some browsers support compression and some do not, so whether to compress is decided from the client's HTTP headers
    upstream MyApp {
        ip_hash;
        # 1. round robin (default): requests are handed to the backend servers one by one in order; if a backend goes down it is skipped automatically
        # 2. weight: polling probability proportional to the weight, for backends with uneven performance
        # 3. ip_hash: requests are assigned by a hash of the client IP, so each visitor consistently reaches the same backend
        # 4. fair (third party): requests are assigned by backend response time, shortest first
        server 172.21.4.91:80 max_fails=2 fail_timeout=10s;
        server 172.21.4.92:80 max_fails=2 fail_timeout=10s;
        # max_fails defaults to 1 and fail_timeout to 10s; by default a backend that fails once within 10 seconds stops receiving requests for that period
        check interval=3000 rise=2 fall=2 timeout=1000 type=http;
        # check every node in this upstream every 3 seconds; a node is marked UP after 2 successful requests (and DOWN after 2 failures)
        check_http_send "GET /appicrm HTTP/1.0\r\n\r\n";
        # check a subdirectory of the site, in this case http://mail.appi.cn/appicrm
    }
    server
    {
        listen 80;
        server_name mail.appi.cn;
        charset utf-8;
        location ~ .*\.(ico|gif|jpg|jpeg|png|bmp|swf|js|css|htm|html)$
        {
            access_log image.log;
            expires 14d;
            root /usr/local/nginx/proxy_cache;
            proxy_store on;
            proxy_temp_path /usr/local/nginx/proxy_cache_image;
            if (!-e $request_filename) {
                # proxy the request to the upstream when the file or directory does not exist locally
                proxy_pass http://MyApp;
            }
            # rewrite ^(.*) http://www.test.com/test/$domain/ break;
        }
        # last: re-evaluate the rewritten address against the server block
        # break: serve the rewritten address within the current location block
        location / {
            rewrite ^/(.*)$ /appicrm/$1 last;
        }
        location ~* ^/appicrm/.*$ {
            proxy_set_header Host $host;
            proxy_set_header REMOTE-HOST $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://MyApp;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
        location /webstatus {
            check_status;
            access_log off;
            error_log off;
            auth_basic "Restricted";
            auth_basic_user_file /usr/local/nginx/conf/htpasswd/test;
            # allow IP;
            # deny all;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
[root@localhost ~]#
#
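After saving the configuration, a quick syntax check before (re)starting nginx saves trouble (a sketch):
[root@node-01 ~]# nginx -t -c /usr/local/nginx/conf/nginx.conf    # should report "syntax is ok" and "test is successful"
[root@node-01 ~]# systemctl restart nginx
[root@node-01 ~]# systemctl status nginx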
There are several issues to pay attention to:
1. A pid error like the following may appear:
18:11:24 localhost.localdomain systemd[1]: Failed to read PID from file /var/run/nginx.pid: Invalid argument
Solution: in the /usr/lib/systemd/system/nginx.service file, either point the PIDFile= line at the location where nginx actually writes its pid, or simply comment that line out (which is what I ended up doing, since my path fix went wrong again; the pid path given at configure time did not match, and I do not know why). Many solutions are suggested online, including foreign sites that tell you to install various support packages, but in my testing those were the wrong answer.
2. You probably do not want everyone to be able to open the /webstatus status page, so protect it with basic authentication (htpasswd is provided by the httpd-tools package).
[root@node-01 ~]# mkdir /usr/local/nginx/conf/htpasswd/
[root@node-01 ~]# htpasswd -c /usr/local/nginx/conf/htpasswd/test Youki
New password:
Re-type new password:
Adding password for user Youki
[root@node-01 ~]# vi /usr/local/nginx/conf/nginx.conf
auth_basic "Restricted";
auth_basic_user_file /usr/local/nginx/conf/htpasswd/test;
Add the two lines above inside the location /webstatus block.
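To confirm the protection works, a quick check with curl (a sketch; it assumes the VIP is active and uses the Youki account created above, so adjust the address to your setup):
curl -I http://172.21.4.44/webstatus          # without credentials: expect HTTP/1.1 401 Unauthorized
curl -u Youki http://172.21.4.44/webstatus    # prompts for the htpasswd password and should return the check status page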
At this point, I believe you have a deeper understanding of the process of installing MySQL-Cluster 7.3.4 on CentOS 6.5; you might as well try it out in practice.