Catalogue
Phase II Project Technical Document
Revised version
1. Technical Forum Building Project
2. Project Topology
3. Project Requirements
4. Project Host Planning
5. Ansible Automated Deployment Service 2.0
5.1. Install ansible
5.2. Configure passwordless login
5.3. Create the task directory
5.4. Batch deployment of apache
5.4.1. Package the apache installed on this server
5.4.2. Build the apache task
5.4.3. Auxiliary installation script
5.5. Batch deployment of nginx
5.5.4. Package the nginx installed on this server
5.5.5. Build the nginx task
5.6. Batch deployment of PHP
5.7. Batch deployment of MySQL
5.8. Batch deployment of logstash
5.9. Batch deployment of zabbix-agent
5.10. Script site.yml
5.11. Deploy discuz
5.11.6. Create a mysql test account for discuz
5.11.7. Upload the package to the root directory
5.11.8. Create the script
5.11.9. Create the discuz installation script (recommended to execute it directly on the server)
5.11.10. Create the call script
5.11.11. Run the script
6. Deploy MHA
6.1. Experimental environment
6.1.1. Host configuration
6.1.2. Configure hostname mapping
6.1.3. Configure passwordless authentication (all nodes)
6.1.4. Upload the software packages
6.1.5. Install the software dependency packages with yum
6.1.6. Install software dependencies and packages on each node
6.2. Install MHA Manager
6.2.7. Install the perl modules that MHA Manager depends on
6.2.8. Install the MHA Manager package
6.3. Build a master-slave replication environment
6.3.9. Configure the master database server on mha-master
6.3.10. Configure the slave service on mha-slave1
6.3.11. Configure the slave service on mha-slave2
6.3.12. Set read_only on the two slave servers
6.3.13. Set how the relay log is cleared (on each slave node)
6.4. Configure MHA
6.4.14. Create a working directory for MHA and create the related configuration file
6.4.15. Write the script /usr/bin/master_ip_failover (perl)
6.4.16. Set the VIP address (master)
6.4.17. Check the SSH configuration
6.4.18. Check the status of the entire cluster replication environment
6.4.19. Enable MHA Manager monitoring
6.4.20. Check whether MHA Manager monitoring is normal
6.4.21. View startup status
6.4.22. Open a new window to observe the log
7. Ceph Cluster
7.1. Environment preparation
7.1.1. Add a 20 GB hard disk to the first three servers
7.1.2. Disable selinux and the firewall (all nodes)
7.1.3. Configure the hosts file (all nodes)
7.1.4. Configure ssh passwordless login (management node)
7.1.5. Upload the software package and extract it
7.1.6. Yum source configuration
7.1.7. Copy the ceph package and related files to ceph2 and ceph3
7.1.8. Deploy ceph on ceph0, ceph2, ceph3 and install dependency tools
7.2. Deployment on ceph0
7.2.9. Create the monitor service
7.2.10. Modify the number of replicas
7.2.11. Install the ceph monitor
7.2.12. Collect the node's keyring files
7.2.13. Deploy the osd service
7.2.14. Deploy the mds service
7.2.15. View cluster status
7.3. Create a ceph file system
7.3.16. View file systems before creation
7.3.17. Create storage pools
7.3.18. Create the file system
7.3.19. View the ceph file system
7.3.20. View mds node status
7.4. Mount the Ceph file system with the kernel driver
7.4.21. Create the mount point
7.4.22. Mount using a key
7.4.23. Mount using a key file
7.5. Using RBD
7.5.24. Check whether the linux kernel supports RBD
7.5.25. Create the rbd storage pool
7.5.26. Create a block device of the specified size
7.5.27. View test1 information
7.5.28. Map into the kernel
7.5.29. Mount and use
8. Nginx+apache dynamic and static separation
8.1. Deploy nginx
8.1.1. Upload the required software package and extract it to the specified directory; the rz command needs to be installed (nginx1)
8.1.2. Hide the version number
8.1.3. Install nginx dependency packages
8.1.4. Pre-compile
8.1.5. Compile and install
8.1.6. Start and view the nginx port
8.1.7. Check the version number and test the web page
8.1.8. Modify the nginx running account
8.1.9. Generate the nginx startup script
8.1.10. Configure the service to start at boot
8.2. Deploy apache (required on both)
8.2.11. Upload the required software package and extract it to the specified directory
8.2.12. Hide version information
8.2.13. Install apache dependency packages
8.2.14. Delete any previously installed httpd
8.2.15. Precompile, compile and install
8.2.16. Modify the main configuration file to speed up apache restarts
8.2.17. Set up the apache startup script
8.2.18. Set the script to start at boot
8.2.19. Start the service and set up the soft link
8.2.20. Check the version number and visit the web page to test
8.2.21. Modify the configuration file again (hide the version number)
8.2.22. Modify the apache running account
8.3. Build php on apache
8.3.23. Upload the libmcrypt and php packages and extract them to the specified directory
8.3.24. Go to the libmcrypt directory, compile and install
8.3.25. PHP pre-compilation
8.3.26. Compile and install
8.3.27. Generate the configuration file
8.3.28. Add the apache php support module
8.3.29. Create the php test page and mysql connection page
8.3.30. Restart the httpd service and test the web page
8.4. Apache2 is configured the same as apache1
8.5. Configure nginx load balancing
8.5.31. Modify the nginx main configuration file
8.5.32. Restart the nginx server, edit the test file on apache, and visit the web page to test
8.6. Configure nginx dynamic and static separation
8.6.33. Modify the nginx configuration file
8.6.34. Test dynamic and static separation
8.7. Set up a discuz forum (on apache)
8.7.35. Upload the discuz package and extract it to the specified directory
8.7.36. Copy the forum to the website root
8.7.37. Modify the php.ini file
8.7.38. Restart httpd and open the web page to install discuz
8.7.39. Transfer /usr/local/bbs/ and /usr/local/httpd/htdocs/bbs on apache to nginx
8.8. Apache2 is configured the same as apache1
8.9. Nginx optimization
8.9.40. Set the number of nginx worker processes
8.9.41. Nginx cpu affinity, 4-core 4-thread configuration
8.9.42. Set the maximum number of open files for nginx
8.9.43. http body optimization
8.9.44. Modify the website domain name
8.9.45. Fastcgi tuning
8.9.46. Gzip tuning
8.9.47. Log rotation optimization
8.9.48. Directory file access control
8.9.49. Restrictions on access to directories
8.9.50. Hotlink protection
9. LVS+Keepalived load cluster
9.1. LVS environment installation and configuration (lvs1/lvs2)
9.1.1. Install the ipvsadm tools
9.2. Keepalived environment installation and configuration
9.2.2. Install keepalived
9.2.3. Keepalived configuration
9.2.4. View the VIP
9.2.5. Test keepalived failover
9.2.6. Bind the VIP on the real servers (web servers)
9.2.7. Test access to web pages through the VIP
10. ELK+kafka with a message queue
10.2. Upload packages
10.3. Install JDK
10.4. Install elasticsearch
10.4.2. Create a user
10.4.3. Modify system parameters
10.4.4. Start elasticsearch
10.4.5. Test
10.5. Install ElasticSearch-head (omitted)
10.6. Install Logstash
10.7. Install kibana
10.7.6. Extract the installation package
10.7.7. Edit the configuration file
10.7.8. Start the service
10.7.9. Test
10.8. Install Filebeat on the collected end and connect it to Logstash
10.8.10. Upload packages
10.8.11. Extract the packages
10.8.12. Install jdk
10.8.13. Install logstash
10.8.14. Edit the configuration file
10.8.15. Configure the configuration file on Logstash
10.8.16. Start the filebeat process
10.8.17. Validation
10.9. Introduce the message queue: ELK+kafka
10.9.18. Install nginx
10.9.19. Install logstash on nginx (install zookeeper+kafka)
10.9.20. Install kafka
10.10. Upload packages
10.11. Install jdk
10.12. Extract the packages
10.13. Generate the configuration file
10.14. Configure Logstash on kafka (the collected log side)
10.14.21. Extract the package
10.14.22. Generate the configuration file
10.14.23. Test
11. Zabbix
11.2. Establish a time synchronization environment and build a time synchronization server on zabbix-server
11.2.2. Install NTP (turn off firewall/selinux)
11.2.3. Configure NTP
11.2.4. Restart the service and enable it at boot
11.2.5. Time synchronization on the other servers (turn off firewall/selinux)
11.3. Create the zabbix database and authorized users
11.3.6. Authorization
11.3.7. Import the database files
11.4. Install the Zabbix-Server server
11.4.8. Compile and install zabbix on zabbix-server
11.4.9. Edit the configuration file and start it
11.4.10. View the listening port
11.4.11. Set up the startup script
11.4.12. Make a soft link
11.4.13. Enable start at boot
11.5. Install the Zabbix-Web server (compile and install nginx)
11.5.14. Nginx installation and optimization
11.5.15. php installation
11.5.16. Configure the zabbix web page
11.5.17. View the current system time zone
11.5.18. Modify the configuration file to support zabbix
11.5.19. Install zabbix
11.5.20. To reassign the database and nginx addresses, specify them in the configuration file
11.6. Zabbix user management
11.6.21. Change the password
11.6.22. Create a user
11.6.23. Click add to join the corresponding group
11.6.24. Upload the simkai.ttf package to fix garbled Chinese characters in Zabbix
11.7. Install the Zabbix-Agent side
11.7.25. Install on the mha-slave1 host
11.7.26. Edit the configuration file
11.7.27. Set it as a system service and add a soft link
11.7.28. Enable start at boot
11.8. Add groups and hosts
11.8.29. Create a group
11.8.30. Add a host
11.8.31. Add a template
11.8.32. View the graph
11.9. Monitor a designated port
11.9.33. Zabbix monitors designated ports (such as mysql ports)
11.9.34. Create a template
11.9.35. Create a monitoring item
11.9.36. Trigger settings
11.9.37. Apply the template to the host
11.9.38. View the latest data
11.9.39. You can also view the graph
11.9.40. Test monitoring
11.10. Send alarm emails
11.10.41. Register a mailbox
11.10.42. Install the mailx software
11.10.43. Edit the configuration file
11.10.44. Send a test email
11.10.45. Check the test email
11.10.46. Configure the send-mail script
11.10.47. Alarms on the Web page
11.10.48. Test
11.11. Add php modules dynamically
11.11.49. Add the bcmath module
11.11.50. Add the gd module
11.11.51. Add the gettext module
12. DNS deployment
12.1. Master DNS deployment
12.1.1. Experimental environment
12.1.2. Install the required packages
12.1.3. Edit the main DNS configuration file
12.1.4. Edit the zone configuration
12.1.5. Configure the main zone forward data file
12.1.6. Edit the reverse zone resolution file
12.1.7. Check the configuration files
12.1.8. Start named
12.1.9. Verify forward and reverse DNS resolution
12.2. Deploy the slave DNS
12.2.10. Install the required software packages
12.2.11. Write the main configuration file
12.2.12. Prepare the zone files
12.2.13. Start the service
12.2.14. Modify the DNS that the clients use
12.2.15. Check whether the slaves folder synchronizes the zone files
12.2.16. Validation
1. Construction project of technical forum
Project profile
At this stage, the company needs to set up a technical forum to provide services. The website design must achieve high availability and high load capacity, with monitoring and backup added.
Project topology
Project requirements
1. Use LVS+keepalived to achieve load balancing
2. Use nginx/haproxy to implement a reverse proxy
3. Use nginx and apache to implement dynamic and static separation
4. Use MHA to build the mysql cluster
5. Use a ceph cluster for distributed storage
6. Set up the discuz forum
7. Set up DNS resolution for the website domain name
8. Set up an ELK+kafka/redis cluster to collect website logs
9. Use zabbix to monitor the hardware metrics and service ports of each server
10. Back up the mysql database to the ceph cluster
11. Use ansible to deploy nginx, apache, php, mysql, zabbix-agent, and logstash in batches
Project host planning
Hostname        IP                                      Role            Installed software                                                      Cluster
ansible         192.168.43.71                           ansible         ansible                                                                 NULL
mha-manager     192.168.43.10                           manager         mha                                                                     mha
mha-master      192.168.43.11 (VIP 192.168.43.110)      master          mysql                                                                   mha
mha-slave1      192.168.43.12                           slave1          mysql                                                                   mha
mha-slave2      192.168.43.13                           slave2          mysql                                                                   mha
ceph0           192.168.43.41                           admin/osd/mgr   ceph                                                                    ceph
ceph2           192.168.43.42                           osd, mds        ceph                                                                    ceph
ceph3           192.168.43.43                           osd, mds        ceph                                                                    ceph
ceph4           192.168.43.44                           client          ceph                                                                    ceph
nginx1          192.168.43.31                           nginx           nginx, discuz                                                           web
nginx2          192.168.43.32                           nginx           nginx, discuz                                                           web
apache1         192.168.43.33                           apache          apache, php, discuz                                                     web
apache2         192.168.43.34                           apache          apache, php, discuz                                                     web
lvs1            DIP 192.168.43.60 (VIP 192.168.43.100)  MASTER          ipvsadm, keepalived                                                     lvs
lvs2            DIP 192.168.43.61 (VIP 192.168.43.100)  BACKUP          ipvsadm, keepalived                                                     lvs
elk             192.168.43.80                           elk             elasticsearch, jdk, node, elasticsearch-head-master, kibana, logstash   elk
filebeat        192.168.43.81                           filebeat        filebeat, kibana, jdk                                                   elk
logstash        192.168.43.82                           logstash        logstash, jdk                                                           elk
zabbix-server   192.168.43.20                           zabbix-server   zabbix-3.2.6.tar.gz                                                     zabbix
dns1            192.168.43.50                           master DNS      bind                                                                    DNS
dns2            192.168.43.51                           standby DNS     bind                                                                    DNS
Ansible Automated deployment Services 2.0
Install ansible
[root@ansible ~] # vim /etc/yum.repos.d/renzeizuofu.repo
[ansib]
name=ansible
baseurl=file:///root/ansible
enabled=1
gpgcheck=0
[root@ansible ~] # ls
ansible.tar.gz
[root@ansible ~] # tar -zxf ansible.tar.gz
[root@ansible ~] # yum -y install ansible
Configure passwordless login
[root@ansible ~] # ssh-keygen
[root@ansible ~] # ssh-copy-id 192.168.10.1
[root@ansible ~] # ssh-copy-id 192.168.10.2
[root@ansible ~] # ssh-copy-id 192.168.10.3
[root@ansible ~] # ssh-copy-id 192.168.10.4
[root@ansible ~] # ssh-copy-id 192.168.10.5
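Before building the roles, it is worth confirming that the key actually works from the ansible host; a quick, hedged check against the same address list used above:
[root@ansible ~] # for ip in 192.168.10.{1..5}; do ssh -o BatchMode=yes root@$ip hostname; done
# every host should print its hostname without prompting for a password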
Create a task directory
[root@ansible ~] # mkdir -p /etc/ansible/GspTest/{nginx,apache,php,zabbix-agent,logstash}/{tasks,files,templates,vars,meta,default,handlers}
Batch deployment of apache
Package the apache installed on this server
[root@ansible ~] # cd /etc/ansible/GspTest/apache/
[root@ansible apache] # ls files/
apache.sh apache.tar.gz
Build the apache task
[root@ansible apache] # vim tasks/main.yml
- name: input apache
  copy: src=apache.tar.gz dest=/root/ force=yes owner=root mode=0755
- name: input shell
  copy: src=apache.sh dest=/root/ force=yes owner=root mode=0755
- name: execute shell
  shell: /bin/bash /root/apache.sh
- name: delete
  shell: rm -rf /root/apache.tar.gz /root/apache.sh
Auxiliary installation script
[root@ansible apache] # vim files/apache.sh
#!/bin/bash
ApacheDir=/usr/local/httpd
yum -y remove httpd &> /dev/null
D=$(find $(echo $PATH | awk -F':' '{print $1,$2,$3,$4}') -name httpd | wc -l)
if [ $D -eq 0 ]; then
  cd /root && tar -zxf apache.tar.gz -C /usr/local/ &&
  sed -i '1a#chkconfig: 2345 11 88' $ApacheDir/bin/apachectl && rm -rf /etc/init.d/httpd &&
  cp --force $ApacheDir/bin/apachectl /etc/init.d/httpd &&
  chkconfig --add httpd && chkconfig httpd on && rm -rf /root/apache.tar.gz /root/apache.sh && ln -s /usr/local/httpd/bin/* /usr/bin/ &> /dev/null
else
  for i in {0..20}; do echo "You already have an apache installed, please remove it" >> /dev/pts/$i; done
fi
Batch deployment _ Nginx
Package the nginx installed on this server
[root@ansible ~] # cd /etc/ansible/GspTest/nginx/
[root@ansible nginx] # ls files/
nginx.sh nginx.tar.gz
Build the nginx task
[root@ansible nginx] # vim tasks/main.yml
- name: input nginx
  copy: src=nginx.tar.gz dest=/root force=yes
- name: input shell
  copy: src=nginx.sh dest=/root force=yes
- name: install nginx
  shell: /bin/bash /root/nginx.sh
- name: start nginx
  service: name=nginx state=started
- name: delete
  shell: rm -rf /root/nginx.tar.gz /root/nginx.sh
Auxiliary installation script
[root@ansible nginx] # vim files/nginx.sh
#!/bin/bash
NginxDir=/usr/local/nginx
D=$(find $(echo $PATH | awk -F':' '{print $1,$2,$3,$4}') -name nginx | wc -l)
if [ $D -eq 0 ]; then
  cd /root && tar -zxf nginx.tar.gz -C /usr/local/ && rm -rf nginx.tar.gz
  cat > /etc/init.d/nginx <<EOF
...
EOF
else
  for i in {0..20}; do echo "You already have an nginx installed, please remove it" >> /dev/pts/$i; done
fi
Batch deployment of PHP
[root@ansible ~] # cd /etc/ansible/GspTest/php
[root@ansible php] # ls files/
libmcrypt-2.5.7.tar.gz php-5.6.36.tar.gz php.sh
[root@ansible php] # vim tasks/main.yml
- name: copy tar
  copy: src=php-5.6.36.tar.gz dest=/root/
- name: copy tar
  copy: src=libmcrypt-2.5.7.tar.gz dest=/root/
- name: copy php.sh
  copy: src=php.sh dest=/tmp
- name: bash php.sh
  shell: /bin/bash /tmp/php.sh
[root@ansible php] # vim files/php.sh
#!/bin/bash
cd /root
libmcryp=$(ls libmcrypt-*.tar.gz)
php=$(ls php-*.tar.gz)
apache="/usr/local/httpd"
A=$(ls $libmcryp | wc -l)
if [ $A -gt 0 ]; then
  tar zxf $libmcryp -C /usr/local/src
  D=$(ls -l /usr/local/src/ | grep ^d | grep libmcryp | awk '{print $9}')
  cd /usr/local/src/$D
  ./configure --prefix=/usr/local/libmcrypt && make && make install && cd /root
else
  for i in {0..20}; do echo "libmcryp package does not exist!" >> /dev/pts/$i; done
fi
B=$(ls $php | wc -l)
if [ $B -gt 0 ]; then
  tar zxf $php -C /usr/local/src/
  E=$(ls -l /usr/local/src/ | grep ^d | grep php | awk '{print $9}')
  cd /usr/local/src/$E
  ./configure --prefix=/usr/local/$E --with-mysql=mysqlnd --with-pdo-mysql=mysqlnd --with-mysqli=mysqlnd --with-openssl --enable-fpm --enable-sockets --enable-sysvshm --enable-mbstring --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --with-mhash --with-mcrypt=/usr/local/libmcrypt --with-config-file-path=/etc --with-config-file-scan-dir=/usr/local/php5.6/etc/ --with-bz2 --enable-maintainer-zts --with-apxs2=/usr/local/httpd/bin/apxs &&
  for i in {0..20}; do echo "PHP,Configure OK,now makefile" >> /dev/pts/$i; done
  make &&
  make install &&
  for i in {0..20}; do echo "PHP install complete!!" >> /dev/pts/$i; done
  rm -rf /usr/local/$E/etc/php.ini && cp php.ini-production /usr/local/$E/etc/php.ini
else
  for i in {0..20}; do echo "PHP package does not exist!" >> /dev/pts/$i; done
fi
if [ $? -eq 0 ]; then
  cd $apache
  YourIP=$(ifconfig | sed -n 2p | awk '{print $2}')
  sed -i '/DirectoryIndex index.html/s/index.html/index.html index.php/g' conf/httpd.conf
  sed -i '/AddType application\/x-gzip/a\ AddType application/x-httpd-php .php .phtml' conf/httpd.conf
  sed -i "/ServerName www.example.com/a\ ServerName $YourIP:80" conf/httpd.conf
  echo "" > /usr/local/httpd/htdocs/index.php
  systemctl restart httpd
fi
Batch deployment of MySQL
[root@ansible ~] # cd /etc/ansible/GspTest/mysql/
[root@ansible mysql] # ls files/
mysql.sh mysql.tar.gz
[root@ansible mysql] # vim tasks/main.yml
- name: input mysql
  copy: src=mysql.tar.gz dest=/root force=yes owner=root mode=0755
- name: input shell script
  copy: src=mysql.sh dest=/root force=yes owner=root mode=0755
- name: install mysql
  shell: /bin/bash /root/mysql.sh
- name: start mysql
  service: name=mysqld state=started
- name: delete
  shell: rm -rf /root/mysql.tar.gz /root/mysql.sh
Auxiliary installation script
[root@ansible mysql] # vim files/mysql.sh
#!/bin/bash
basedir=/usr/local/mysql
datadir=/data/mysql/data
log=/data/mysql/log
yum -y remove boost mariadb &> /dev/null
if [ $(rpm -qa mariadb* boost* | wc -l) -eq 0 ]; then
  cd /root && tar -zxf mysql.tar.gz -C /usr/local/ && touch /etc/my.cnf && mkdir -p $basedir /data/mysql/{data,log} && useradd -M -s /sbin/nologin mysql &> /dev/null && chown -Rf mysql:mysql $basedir /data/mysql/ /etc/my.cnf && cat > /etc/my.cnf <<EOF
...
EOF
  chkconfig --add mysqld && chkconfig mysqld on
fi
if [ $? -eq 0 ]; then
  /usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --basedir=$basedir --datadir=$datadir && echo $?
fi
Batch deployment of logstash
[root@ansible ~] # cd /etc/ansible/GspTest/logstash/
[root@ansible logstash] # ls files/
logstash-6.4.2.tar.gz
[root@ansible logstash] # vim tasks/main.yml
- name: logstash
  copy: src=logstash-6.4.2.tar.gz dest=/root force=yes owner=root mode=755
- name: install logstash
  shell: tar -zxf /root/logstash-6.4.2.tar.gz -C /usr/local/
- name: delete
  shell: rm -rf /root/logstash-6.4.2.tar.gz
Batch deployment of zabbix-agent
[root@ansible zabbix-agent] # ls files/
zabbix-4.2.4.tar.gz zabbix_agentd
[root@ansible zabbix-agent] # vim tasks/main.yml
- name: input zabbix-agent
  copy: src=zabbix-4.2.4.tar.gz dest=/root force=yes
- name: install zabbix-agent
  shell: tar -zxf /root/zabbix-4.2.4.tar.gz -C /usr/local
- name: starting up
  copy: src=zabbix_agentd dest=/etc/init.d/
- name: delete
  shell: rm -rf /root/zabbix.sh /root/zabbix-4.2.4.tar.gz
Script site.yml
[root@ansible ~] # cd /etc/ansible/GspTest/
[root@ansible GspTest] # vim site.yml
- hosts: services        # 'services' is a group name from the hosts inventory
  remote_user: root
  roles:
#    - apache
#    - nginx
#    - php
#    - mysql
#    - zabbix_agent
#    - logstash
According to your requirements, edit the hosts inventory and adjust the roles in site.yml: roles left commented out with '#' are not installed, uncommented roles are installed. A minimal inventory sketch follows.
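The hosts inventory itself is not shown in this document; a minimal sketch, assuming the group name services used by site.yml, a web group for the discuz step later, and illustrative member addresses:
[root@ansible ~] # vim /etc/ansible/hosts
[services]
192.168.10.1
192.168.10.2
[web]
192.168.10.4
192.168.10.5
[root@ansible ~] # ansible-playbook /etc/ansible/GspTest/site.yml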
Deploy discuz
Create a mysql test account for discuz
[root@mha-master ~] # mysql -uroot -p123456 -h 192.168.43.110
mysql > grant all on *.* to test@'%' identified by '123456';
mysql > flush privileges;
Upload package to root directory
[root@ansible ~] # ls
Discuz_7.2_FULL_SC_UTF8.zip
Create script
Create a discuz installation script, which is recommended to be executed directly on the server
[root@ansible ~] # vim /etc/ansible/discuz.sh
#!/bin/bash
# location of the PHP configuration file
phpconf=$(find / -name php.ini)
# website directory
wz='/usr/local/nginx/html'
# extract discuz
unzip Discuz_7.2_FULL_SC_UTF8.zip -d /usr/local/bbs
cd /usr/local/bbs
# copy the files to the website directory
cp -r upload/ $wz/bbs
# enable php short tag support
sed -i 's/short_open_tag = Off/short_open_tag = On/' $phpconf
# grant write permission to the forum directory
chmod -R 777 $wz/bbs/
# restart php
/etc/init.d/php-fpm restart
Create a call script
[root@ansible ~] # vim andiscuz.sh
#!/bin/bash
# name of the discuz package to install
Discuz='Discuz_7.2_FULL_SC_UTF8.zip'
# copy the package to the target servers
ansible -i /etc/ansible/hosts web -m copy -a "src=/root/$Discuz dest=/root"
# run the discuz installation script
ansible -i /etc/ansible/hosts web -m script -a "/etc/ansible/discuz.sh"
Run the script
[root@ansible ~] # sh andiscuz.sh
Deploy MHA
Experimental environment
Host configuration.
Configure hostname mapping.
[root@mha-manager ~] # vim / etc/hosts
192.168.43.10 mha-manager
192.168.43.11 mha-master
192.168.43.12 mha-slave1
192.168.43.13 mha-slave2
Configure passwordless authentication (all nodes)
[root@mha-manager ~] # ssh-keygen -t rsa
[root@mha-manager ~] # ssh-copy-id 192.168.43.10
[root@mha-manager ~] # ssh-copy-id 192.168.43.11
[root@mha-manager ~] # ssh-copy-id 192.168.43.12
[root@mha-manager ~] # ssh-copy-id 192.168.43.13
Upload the package.
[root@mha-manager ~] # ls
Mha4mysql-manager-0.57-0.el7.noarch.rpm
Mha4mysql-node-0.57-0.el7.noarch.rpm
Mhapath.tar.gz
Yum installation software dependency package
Install the software dependency package
[root@mha-manager ~] # vim / etc/yum.repos.d/mhapath.repo
[mha]
Name=mhapath
Baseurl= file:///root/mhapath
Enabled=1
Gpgchack=0
[root@mha-manager ~] # tar -zxvf mhapath.tar.gz
Configure the local yum source
[root@mha-manager ~] # vim /etc/yum.repos.d/centos7.repo
[centos7]
name=centos7
baseurl=file:///mnt
enabled=1
gpgcheck=0
[root@mha-manager ~] # ls /etc/yum.repos.d/
centos7.repo mhapath.repo
[root@mha-manager ~] # mount /dev/sr0 /mnt
Copy software packages and yum configuration files to other nodes
[root@mha-manager ~] # scp -r /etc/yum.repos.d/ 192.168.43.11:/etc/yum.repos.d/
[root@mha-manager ~] # scp -r /etc/yum.repos.d/ 192.168.43.12:/etc/yum.repos.d/
[root@mha-manager ~] # scp -r /etc/yum.repos.d/ 192.168.43.13:/etc/yum.repos.d/
[root@mha-manager ~] # scp -r /root/mha4mysql-node-0.57-0.el7.noarch.rpm 192.168.43.11:/root
[root@mha-manager ~] # scp -r /root/mha4mysql-node-0.57-0.el7.noarch.rpm 192.168.43.12:/root
[root@mha-manager ~] # scp -r /root/mha4mysql-node-0.57-0.el7.noarch.rpm 192.168.43.13:/root
[root@mha-manager ~] # scp -r mhapath 192.168.43.11:/root
[root@mha-manager ~] # scp -r mhapath 192.168.43.12:/root
[root@mha-manager ~] # scp -r mhapath 192.168.43.13:/root
Install the software dependencies and the node package on each node
[root@mha-manager ~] # yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager --skip-broken --nogpgcheck
[root@mha-manager ~] # rpm -ivh mha4mysql-node-0.57-0.el7.noarch.rpm
[root@mha-slave2 ~] # cd / usr/bin
[root@mha-slave2 bin] # ll app filter purge save
-rwxr-xr-x 1 root root 16381 May 31 2015 apply_diff_relay_logs
-rwxr-xr-x. 1 root root 46256 June 10 2014 filterdiff
-rwxr-xr-x 1 root root 4807 May 31 2015 filter_mysqlbinlog
-rwxr-xr-x 1 root root 8261 May 31 2015 purge_relay_logs
-rwxr-xr-x 1 root root 7525 May 31 2015 save_binary_logs
Install MHA Manager
Install the perl modules that MHA Manager depends on
[root@mha-manager ~] # yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN -y --nogpgcheck
Install the MHA Manager package:
[root@mha-manager ~] # rpm -ivh mha4mysql-manager-0.57-0.el7.noarch.rpm
The following script files are generated under the /usr/bin directory after the installation completes:
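The list itself did not survive in this copy; with mha4mysql-manager 0.57 the scripts below are what the package typically installs, so a quick hedged check is:
[root@mha-manager ~] # ls /usr/bin/masterha_*
# expected: masterha_check_repl  masterha_check_ssh  masterha_check_status  masterha_conf_host
#           masterha_manager  masterha_master_monitor  masterha_master_switch  masterha_secondary_check  masterha_stop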
Build a master-slave replication environment
Configure the master database server on mha-master
Configure my.cnf:
[root@mha-master ~] # vim /etc/my.cnf
log-bin=mysql-bin-master      # enable binary logging
server-id=1                   # ID of this database server
binlog-ignore-db=mysql        # database that is not replicated to the slaves
[root@mha-master ~] # systemctl restart mysqld
Create a database that needs to be synchronized:
[root@mha-master ~] # mysql -uroot -p123456
mysql > create database HA;
mysql > use HA;
mysql > create table test (id int, name varchar(20));
Authorization
The repl user is used for master-slave synchronization and the root user is used by MHA.
mysql > grant replication slave on *.* to repl@'192.168.43.%' identified by '123456';
mysql > grant all privileges on *.* to 'root'@'192.168.43.%' identified by '123456';
mysql > flush privileges;      # refresh privileges
Configure the slave service on mha-slave1:
Configure my.cnf:
[root@mha-slave1 ~] # vim /etc/my.cnf
log-bin=mysql-slave1          # enable binary logging
server-id=2                   # ID of this database server
binlog-ignore-db=mysql        # database that is not replicated
[root@mha-slave1 ~] # systemctl restart mysqld
Authorization
[root@mha-slave1 ~] # mysql -uroot -p123456
mysql > grant replication slave on *.* to 'repl'@'192.168.43.%' identified by '123456';
mysql > grant all privileges on *.* to 'root'@'192.168.43.%' identified by '123456';
mysql > flush privileges;      # refresh privileges
Establish the master-slave relationship
mysql > stop slave;
mysql > change master to master_host='192.168.43.11',master_user='repl',master_password='123456';
mysql > start slave;
mysql > show slave status\G
Configure the slave service on mha-slave2:
Configure my.cnf:
[root@mha-slave2 ~] # vim /etc/my.cnf
log-bin=mysql-slave2          # enable binary logging
server-id=3                   # ID of this database server
binlog-do-db=HA               # name of the database that needs to be synchronized
log_slave_updates=1           # only with log_slave_updates enabled will this slave's binlog record the operations replicated from the master
[root@mha-slave2 ~] # systemctl restart mysqld
[root@mha-slave2 ~] # mysql -uroot -p123456
mysql > grant replication slave on *.* to 'repl'@'192.168.43.%' identified by '123456';
mysql > grant all privileges on *.* to 'root'@'192.168.43.%' identified by '123456';
mysql > flush privileges;      # refresh privileges
Establish the master-slave relationship
mysql > stop slave;
mysql > change master to master_host='192.168.43.11',master_user='repl',master_password='123456';
mysql > start slave;
mysql > show slave status\G
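A quick, hedged way to confirm both replication threads are healthy on each slave (using the credentials configured above):
[root@mha-slave1 ~] # mysql -uroot -p123456 -e 'show slave status\G' | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
# Slave_IO_Running and Slave_SQL_Running should both report Yes before continuing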
Set read_only on the two slave servers
The slaves provide read service to applications. read_only is not written into the configuration file because either slave can be promoted to master at any time.
[root@mha-slave1 ~] # mysql -uroot -p123456 -e 'set global read_only=1'
[root@mha-slave2 ~] # mysql -uroot -p123456 -e 'set global read_only=1'
Set how the relay log is cleared (on each slave node)
[root@mha-slave1 ~] # mysql -uroot -p123456 -e 'set global relay_log_purge=0'
[root@mha-slave2 ~] # mysql -uroot -p123456 -e 'set global relay_log_purge=0'
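Because relay_log_purge is disabled, relay logs must be purged out of band. A minimal sketch using the purge_relay_logs tool shipped with mha4mysql-node; the schedule, working directory and log path are illustrative:
[root@mha-slave1 ~] # cat >> /var/spool/cron/root <<'EOF'
0 4 * * * /usr/bin/purge_relay_logs --user=root --password=123456 --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1
EOF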
Configure MHA
Create a working directory for MHA and create the related configuration file
[root@mha-manager ~] # mkdir -p /etc/masterha
[root@mha-manager ~] # mkdir -p /var/log/masterha/app1
[root@mha-manager ~] # vim /etc/masterha/app1.cnf
[server default]
manager_log=/var/log/masterha/app1/manager.log
manager_workdir=/var/log/masterha/app1
master_binlog_dir=/data/mysql/data      # binlog directory of mysql; it should be the same on the master and both slaves
password=123456
remote_workdir=/tmp
repl_password=123456
repl_user=repl
ssh_user=root
user=root
master_ip_failover_script=/usr/bin/master_ip_failover      # failover script
[server1]
hostname=192.168.43.11
port=3306
[server2]
candidate_master=1
check_repl_delay=0
hostname=192.168.43.12
port=3306
[server3]
hostname=192.168.43.13
port=3306
Write the script /usr/bin/master_ip_failover (a perl script)
[root@mha-manager ~] # vim /usr/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.43.110/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);
exit &main();
sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        # `ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
[root@mha-manager ~] # chmod +x /usr/bin/master_ip_failover
Set the VIP address (master)
[root@mha-master ~] # ifconfig ens33:1 192.168.43.110 netmask 255.255.255.0 up
Check the SSH configuration
[root@mha-manager ~] # masterha_check_ssh --conf=/etc/masterha/app1.cnf
Check the status of the entire cluster replication environment
[root@mha-manager ~] # rm -rf /var/log/masterha/app1/app1.master_status.health
[root@mha-manager ~] # masterha_check_repl --conf=/etc/masterha/app1.cnf
A soft link is required: ln -s /usr/local/mysql/bin/* /usr/local/bin
Enable MHA Manager monitoring
[root@mha-manager ~] # nohup masterha_manager --conf=/etc/masterha/app1.cnf \
--remove_dead_master_conf --ignore_last_failover < /dev/null > \
/var/log/masterha/app1/manager.log 2>&1 &
[1] 22627
Check whether MHA Manager monitoring is normal
[root@mha-manager ~] # masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:23108) is running (0:PING_OK), master:192.168.43.11
View startup status
[root@mha-manager ~] # tail -n 20 /var/log/masterha/app1/manager.log
Open a new window to view the log
[root@mha-manager ~] # tail -f /var/log/masterha/app1/manager.log
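The document does not include a failover drill; a hedged sketch of one (lab only, it really promotes a slave):
[root@mha-master ~] # systemctl stop mysqld                                  # simulate a master crash
[root@mha-manager ~] # tail -f /var/log/masterha/app1/manager.log            # the failover progress appears here
[root@mha-slave1 ~] # ip addr show ens33 | grep 192.168.43.110               # the VIP should now be bound on the promoted slave
[root@mha-slave2 ~] # mysql -uroot -p123456 -e 'show slave status\G' | grep Master_Host   # the remaining slave should point at the new master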
Ceph cluster
Environmental preparation:
Add a 20 GB hard disk to each of the first three servers
Disable selinux and the firewall (all nodes)
systemctl stop firewalld      # stop the firewall
setenforce 0                  # put selinux into permissive mode
Configure the hosts file (all nodes)
vim /etc/hosts
192.168.43.40 ceph0
192.168.43.41 ceph2
192.168.43.43 ceph3
192.168.43.44 ceph4
Configure ssh passwordless login (management node)
ssh-keygen        # press Enter at every prompt (empty passphrase)
ssh-copy-id ceph0
ssh-copy-id ceph2
ssh-copy-id ceph3
ssh-copy-id ceph4
Upload the software package and extract it
[root@ceph0 ~] # tar -zxvf ceph-12.2.12.tar.gz
Yum source configuration:
[root@ceph0 ~] # mount /dev/cdrom /mnt      # mount the CD
[root@ceph0 ~] # vim /etc/yum.repos.d/a.repo
[a]
name=a
baseurl=file:///mnt/
gpgcheck=0
enabled=1
[root@ceph0 ~] # vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=file:///root/ceph
enabled=1
gpgcheck=0
Copy the ceph package and related files to ceph2 and ceph3:
[root@ceph0 ~] # scp -r ceph ceph2:/root
[root@ceph0 ~] # scp -r ceph ceph3:/root
[root@ceph0 ~] # scp -r /etc/yum.repos.d/ceph.repo ceph2:/etc/yum.repos.d/
[root@ceph0 ~] # scp -r /etc/yum.repos.d/ceph.repo ceph3:/etc/yum.repos.d/
Deploy ceph on ceph0, ceph2, ceph3 and install the dependency tools:
[root@ceph0 ~] # yum install -y ceph-deploy ceph ceph-radosgw snappy leveldb gdisk python-argparse gperftools-libs
Deployment of Ceph0:
To create a monitor service:
Deploy mon on ceph2,ceph3 at the same time to achieve high availability
[root@ceph0 ~] # cd / etc/ceph/
[root@ceph0 ceph] # ceph-deploy new ceph0
Number of modified copies:
[root@ceph0 ceph] # cd / etc/ceph/
[root@ceph0 ceph] # vim ceph.conf
[global]
fsid = 73d389f3-b720-453f-a50b-fde0f69a0eb3
mon_initial_members = ceph0
mon_host = 192.168.43.40
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2      # add this line at the end
Install ceph monitor:
[root@ceph0 ceph] # ceph-deploy mon create ceph0
Collect the keyring file for the node:
[root@ceph0 ceph] # ceph-deploy gatherkeys ceph0
[root@ceph0 ceph] # ls
[root@ceph0 ceph] # cat ceph.client.admin.keyring
[client.admin]
Key = AQCrAENd5+FbNxAAtWNYcU0KNMMN/xahK7uu8A==
Deploy the osd service:
Use ceph to partition automatically:
[root@ceph0 ceph] # ceph-deploy disk zap ceph0 /dev/sdb
[root@ceph0 ceph] # ceph-deploy disk zap ceph2 /dev/sdb
[root@ceph0 ceph] # ceph-deploy disk zap ceph3 /dev/sdb
Add the osd nodes:
[root@ceph0 ceph] # ceph-deploy osd create ceph0 --data /dev/sdb
[root@ceph0 ceph] # ceph-deploy osd create ceph2 --data /dev/sdb
[root@ceph0 ceph] # ceph-deploy osd create ceph3 --data /dev/sdb
View the status:
[root@ceph0 ceph] # ceph-deploy osd list ceph0 ceph2 ceph3
Deploy the mgr management service
[root@ceph0 ceph] # ceph-deploy mgr create ceph0
Distribute the cluster configuration to all nodes
[root@ceph0 ceph] # ceph-deploy admin ceph0 ceph2 ceph3 ceph4
If ceph4 reports an error, create the directory on ceph4 first: [root@ceph4 ~] # mkdir /etc/ceph
Modify the ceph.client.admin.keyring permissions on each node
[root@ceph0 ceph] # chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph2 ~] # chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph3 ~] # chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph4 ~] # chmod +r /etc/ceph/ceph.client.admin.keyring
Deploy mds services
Install mds
[root@ceph0 ceph] # ceph-deploy mds create ceph2 ceph3
View mds services
[root@ceph0 ceph] # ceph mds stat
, 2 up:standby
View cluster status
[root@ceph0 ceph] # ceph -s
Create a ceph file system
View the file system before creation
[root@ceph0 ceph] # ceph fs ls
No filesystems enabled
Create a storage pool
[root@ceph0 ceph] # ceph osd pool create cephfs_data 128
Pool 'cephfs_data' created
[root@ceph0 ceph] # ceph osd pool create cephfs_metadata 128
Pool 'cephfs_metadata' created
Create a file system
[root@ceph0 ceph] # ceph fs new cephfs cephfs_metadata cephfs_data
New fs with metadata pool 2 and data pool 1
View the ceph file system
[root@ceph0 ceph] # ceph fs ls
Name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data]
View mds node status
[root@ceph0 ~] # ceph mds stat
Cephfs-1/1/1 up {0=ceph3=up:active}, 1 up:standby
Kernel driver mounts Ceph file system
Create a mount point
[root@ceph4 ~] # mkdir / media/aa
[root@ceph0 ~] # cat / etc/ceph/ceph.client.admin.keyring
[client.admin]
Key = AQAzN0NdWGpaJxAAyulOHB/+r2OH55yEm2w1NA==
Mount using a key
[root@ceph4 ~] # mount -t ceph 192.168.43.41:/ /media/aa -o name=admin,secret=AQAzN0NdWGpaJxAAyulOHB/+r2OH55yEm2w1NA==
[root@ceph4 ~] # df -h
Mount using a key file
Copy the ceph package and yum configuration file to ceph4
[root@ceph0 ~] # scp -r ceph ceph4:/root
[root@ceph0 ~] # scp /etc/yum.repos.d/ceph-package.repo ceph4:/etc/yum.repos.d/
Install ceph-common-12.2.12
[root@ceph4 ~] # yum install -y ceph-common-12.2.12
Create a key file (it must contain only the key itself)
[root@ceph4 ~] # vim /etc/ceph/admin.secret
AQAzN0NdWGpaJxAAyulOHB/+r2OH55yEm2w1NA==
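The document stops before showing the actual key-file mount. A minimal sketch, assuming the same monitor address as above and an illustrative mount point /media/bb; the secretfile option of mount.ceph reads the key from the file created above:
[root@ceph4 ~] # mkdir -p /media/bb
[root@ceph4 ~] # mount -t ceph 192.168.43.41:/ /media/bb -o name=admin,secretfile=/etc/ceph/admin.secret
[root@ceph4 ~] # df -h | grep media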
The use of RBD
Check whether the linux kernel supports RBD
[root@ceph0 ~] # modprobe rbd
Create a rbd storage pool
[root@ceph0 ~] # ceph osd pool create rbd 64
Pool 'rbd' created
Create a block device of the specified size
[root@ceph0 ~] # rbd create --size 102400 rbd/test1
View test1 information
[root@ceph0 ~] # rbd info test1
Map into the kernel
[root@ceph0 ~] # rbd feature disable test1 object-map fast-diff deep-flatten exclusive-lock
[root@ceph0 ~] # rbd map test1
/ dev/rbd0
[root@ceph0 ~] # ls / dev/rbd0
/ dev/rbd0
Mount and use
[root@ceph0 ~] # mkdir / media/cephrbd
Format Partition
[root@ceph0 ~] # mkfs.xfs / dev/rbd0
Mounting
[root@ceph0 ~] # mount / dev/rbd0 / media/cephrbd/
Nginx+apache static and dynamic separation
Deploy nginx
Upload the required software package and extract it to the specified directory; the rz upload command requires the lrzsz package. (nginx1)
[root@nginx1 ~] # yum -y install lrzsz
[root@nginx1 ~] # rz
[root@nginx1 ~] # ls
[root@nginx1 ~] # tar -zxf nginx-1.10.3.tar.gz -C /usr/local/src/
Hide the version number
[root@nginx1 ~] # cd /usr/local/src/nginx-1.10.3/      # enter the extracted directory
[root@nginx1 nginx-1.10.3] # vim src/core/nginx.h      # modify the version string
[root@nginx1 nginx-1.10.3] # vim src/http/ngx_http_header_filter_module.c      # modify the Server header string so the version is not echoed
Install the nginx dependency packages
[root@nginx1 nginx-1.10.3] # yum install -y gcc gcc-c++ autoconf automake zlib zlib-devel openssl openssl-devel pcre pcre-devel
Pre-compilation
[root@nginx1 nginx-1.10.3] # ./configure --prefix=/usr/local/nginx --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre
Compilation and installation
[root@nginx1 nginx-1.10.3] # make & & make install
Start to view nginx port
[root@nginx1 ~] # / usr/local/nginx/sbin/nginx
[root@nginx1 ~] # netstat -antup | grep 80
Check the version number and test the web page
[root@nginx1 ~] # curl -I 192.168.43.31
Modify the nginx running account
[root@nginx1 ~] # ps -ef | grep nginx
[root@nginx1 ~] # useradd -M -s /sbin/nologin nginx      # create the nginx account
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf   # set the user directive to nginx
[root@nginx1 ~] # ln -s /usr/local/nginx/sbin/* /usr/local/bin/      # set up the soft link
[root@nginx1 ~] # nginx -s reload        # reload nginx
[root@nginx1 ~] # ps -aux | grep nginx   # view the nginx running account
Generate nginx startup script
[root@nginx1 ~] # vim /etc/init.d/nginx
#!/bin/bash
# chkconfig: - 99 20
# description: Nginx Service Control Script
PROG="/usr/local/nginx/sbin/nginx"
PIDF="/usr/local/nginx/logs/nginx.pid"
case "$1" in
start)
    $PROG
    ;;
stop)
    kill -3 $(cat $PIDF)
    ;;
restart)
    $0 stop &> /dev/null
    if [ $? -ne 0 ]; then continue; fi
    $0 start
    ;;
reload)
    kill -1 $(cat $PIDF)
    ;;
*)
    echo "Usage: $0 {start|stop|restart|reload}"
    exit 1
esac
exit 0
[root@nginx1 ~] # chmod +x /etc/init.d/nginx      # make the script executable
Configuration service boot self-startup
[root@nginx1 ~] # chkconfig --add nginx       # add the script as a system service
[root@nginx1 ~] # chkconfig nginx on          # enable nginx at boot
[root@nginx1 ~] # chkconfig --list nginx      # view the boot entry
Deploy apache (both are required)
Upload the required software package and extract it to the specified directory
[root@apache1 ~] # rz
[root@apache1 ~] # ls
[root@apache1 ~] # tar -zxf httpd-2.4.38.tar.gz -C /usr/local/src/
Hide version information
[root@apache1 ~] # cd / usr/local/src/httpd-2.4.38/ enter the decompression directory
[root@apache1 httpd-2.4.38] # vim include/ap_release.h / / modify the mark
Install the apache dependency package
[root@apache1 httpd-2.4.38] # yum-y install gcc gcc-c++ apr apr-devel cyrus-sasl-devel expat-devel libdb-devel openldap-devel apr-util-devel apr-util pcre-devel pcre openssl
If an httpd is already installed, remove it first
[root@apache1 httpd-2.4.38] # yum -y remove httpd
Precompile, compile and install
[root@apache1 httpd-2.4.38] # ./configure --prefix=/usr/local/httpd --enable-so --enable-rewrite --enable-charset-lite --enable-cgi --enable-ssl --enable-mpms-shared=all
[root@apache1 httpd-2.4.38] # make & & make install
Modify the main configuration file to speed up apache restart
[root@apache1 httpd-2.4.38] # vim / usr/local/httpd/conf/httpd.conf / / modify the mark
Set up the apache startup script
[root@apache1 httpd-2.4.38] # cp / usr/local/httpd/bin/apachectl / etc/init.d/httpd
Copy the script to init.d/httpd
[root@apache1 httpd-2.4.38] # vim / etc/init.d/httpd / / add the following under the first line
# chkconfig: 2345 11 88
# despriction: Web Site
[root@apache1 httpd-2.4.38] # ll / etc/init.d/httpd / / View script permissions, and add executable permissions if there is no permission
Set the script to boot automatically
[root@apache1 httpd-2.4.38] # chkconfig --add httpd
[root@apache1 httpd-2.4.38] # chkconfig --list httpd      # view the boot entry
Start the service and set up the soft link
[root@apache1 httpd-2.4.38] # systemctl start httpd
[root@apache1 httpd-2.4.38] # ln -s /usr/local/httpd/bin/* /usr/local/bin/
Check the version number and visit the web page to test
[root@apache1 httpd-2.4.38] # curl -I 192.168.43.33
Modify the apache running account
[root@apache1 ~] # ps -axu | grep httpd
Create an apache user
[root@apache1 ~] # useradd -M -s /sbin/nologin apache
Edit the configuration file to modify the default user
[root@apache1 ~] # vim / usr/local/httpd/conf/httpd.conf / / modify the following
Restart the httpd service to view the running account
[root@apache1 ~] # systemctl restart httpd
[root@apache1 ~] # ps-axu | grep httpd
Build php on apache
Upload the libmcrypt,php package and extract it to the specified directory
[root@apache1 ~] # rz
[root@apache1 ~] # ls
Install the php dependency package
[root@apache1 ~] # yum-y install php-mcrypt libmcrypt libmcrypt-devel autoconf freetype gd libmcrypt libpng libpng-devel libjpeg libxml2 libxml2-devel zlib curl curl-devel re2c libmcrypt-devel freetype-devel libjpeg-devel bzip2-devel
[root@apache1 ~] # tar -zxf php-5.6.36.tar.gz -C /usr/local/src/
[root@apache1 ~] # tar -zxf libmcrypt-2.5.7.tar.gz
Go to the libmcrypt directory, compile and install
[root@apache1 ~] # cd libmcrypt-2.5.7/
[root@apache1 libmcrypt-2.5.7] # ./configure --prefix=/usr/local/libmcrypt && make && make install
PHP precompilation
[root@apache1 libmcrypt-2.5.7] # cd /usr/local/src/php-5.6.36/
[root@apache1 php-5.6.36] # ./configure --prefix=/usr/local/php5.6 --with-mysql=mysqlnd --with-pdo-mysql=mysqlnd --with-mysqli=mysqlnd --with-openssl --enable-fpm --enable-sockets --enable-sysvshm --enable-mbstring --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --with-mhash --with-mcrypt=/usr/local/libmcrypt --with-config-file-path=/etc --with-config-file-scan-dir=/usr/local/php5.6/etc/ --with-bz2 --enable-maintainer-zts --with-apxs2=/usr/local/httpd/bin/apxs
Compilation and installation
[root@apache1 php-5.6.36] # make
[root@apache1 php-5.6.36] # make install
Generate configuration file
[root@apache1 php-5.6.36] # cp php.ini-production / usr/local/php5.6/etc/php.ini
Add apache support php module
[root@apache1 ~] # vim / usr/local/httpd/conf/httpd.conf
AddType application/x-httpd-php .php .phtml
Create the php test page and the mysql connection test page
[root@apache1 ~] # vim /usr/local/httpd/htdocs/index.php
[root@apache1 ~] # vim /usr/local/httpd/htdocs/test.php
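The contents of the two test pages did not survive in this copy; a minimal sketch (the database address 192.168.43.110 and the test/123456 account are taken from the discuz grant earlier and are assumptions for this example):
[root@apache1 ~] # cat > /usr/local/httpd/htdocs/index.php <<'EOF'
<?php phpinfo(); ?>
EOF
[root@apache1 ~] # cat > /usr/local/httpd/htdocs/test.php <<'EOF'
<?php
$link = mysqli_connect('192.168.43.110', 'test', '123456');
echo $link ? 'mysql connection OK' : 'mysql connection failed';
?>
EOF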
Restart the httpd service, web page testing
[root@apache1 ~] # systemctl restart httpd
Http://192.168.43.33/
Apache2 and apache1 have the same configuration
Configure nginx load balancer
Modify the nginx main configuration file
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf      # modify as follows
Note: add the following below the gzip on line
upstream apache {
    server 192.168.43.33 weight=1;
    server 192.168.43.34 weight=1;
}
Note: comment out lines 47 and 48 (the default root and index) and add the two lines below
    proxy_pass http://apache;
    proxy_redirect default;
Restart the nginx server, edit the test file on apache, and visit the web page to test
[root@apache1 ~] # vim /usr/local/httpd/htdocs/index.php
php1
[root@nginx1 ~] # nginx -s reload
http://192.168.43.31/
Configure nginx for static and dynamic separation
Modify nginx configuration file
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf
Add a location so that pages ending in .php are handled by apache
location ~ \.(php)$ {
    proxy_pass http://apache;
}
Add a static cache location
location ~ .*\.(css|jss|ico|png|jpg|eot|svg|ttf|woff|htm|html|gif|jpeg|bmp|swf|ioc|rar|zip|txt|mid|doc|ppt|pdf|xls|mp3|wma)$ {
    root html;
    expires 30d;
}
[root@nginx1 ~] # nginx -s reload
Test dynamic and static separation
Upload a picture into the web directory of the nginx server
[root@nginx1 ~] # cd /usr/local/nginx/html/
Delete the picture on nginx and upload it into the web directory of the apache server instead; a quick check is sketched below.
[root@apache1 ~] # cd /usr/local/httpd/htdocs/
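A hedged way to confirm the separation is working; test.jpg is an illustrative file name assumed to exist only under the apache document root after the step above:
[root@nginx1 ~] # curl -I http://192.168.43.31/test.jpg       # static requests are answered by nginx itself (404 here, since the file was moved off nginx)
[root@nginx1 ~] # curl http://192.168.43.31/index.php         # dynamic requests return the php1 test content generated by apache
[root@apache1 ~] # tail /usr/local/httpd/logs/access_log      # only .php requests should appear in the apache access log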
Set up a discuz forum (in apache)
Upload the discuz package and extract it to the specified directory
[root@apache1 ~] # rz
[root@apache1] # unzip Discuz_7.2_FULL_SC_UTF8.zip-d / usr/local/bbs
Copy the forum to the root of the website
[root@apache1 ~] # cd / usr/local/bbs/ enter the decompression directory
[root@apache1 bbs] # cp-r upload/ / usr/local/httpd/htdocs/bbs
Modify the php.ini file
[root@apache1 bbs] # cd / usr/local/php5.6/etc/
[root@apache1 etc] # vim php.ini / / Edit configuration file
Give the bbs directory under the web directory 777 permissions
[root@apache1 ~] # chmod -R 777 /usr/local/httpd/htdocs/bbs/
Restart httpd and open the web page to install discuz
[root@apache1 etc] # systemctl restart httpd
http://192.168.43.33/bbs/install/index.php
Transfer /usr/local/bbs/ and /usr/local/httpd/htdocs/bbs on apache to nginx
[root@apache1 ~] # scp -r /usr/local/bbs/ 192.168.43.31:/usr/local/
[root@apache1 ~] # scp -r /usr/local/httpd/htdocs/bbs/ 192.168.43.31:/usr/local/nginx/html
Apache2 and apache1 have the same configuration
Nginx optimization
Set the number of nginx running processes
Check the number of cpus
[root@nginx1 ~] # top      # press 1 to see the number of cpus
Set worker_processes in the nginx configuration file to the number of cpus
[root@nginx1 ~] # nginx -s reload      # reload nginx
Check the number of nginx worker processes
[root@nginx1 ~] # ps -aux | grep nginx
Nginx cpu affinity, 4-core 4-thread configuration
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf
Set the maximum number of open files for nginx
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf      # add the following
worker_rlimit_nofile 102400;
Modify the maximum number of open files of the system
Temporary modification:
[root@nginx1 ~] # ulimit -n 102400
[root@nginx1 ~] # ulimit -n
http body optimization
Turn on efficient transmission mode (sendfile)
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf      # modify the configuration file
Connection timeouts
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf      # add the following
tcp_nodelay on;
client_header_timeout 15;
client_body_timeout 15;
send_timeout 15;
Fastcgi tuning
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf      # add the following inside the http block
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
# fastcgi_temp_path /data/ngx_fcgi_tmp;
fastcgi_cache_path /data/ngx_fcgi_cache levels=2:2
    keys_zone=ngx_fcgi_cache:512m
    inactive=1d max_size=40g;
Add the following inside the server's location section
location ~ .*\.(php|php5)?$
{
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi.conf;
    fastcgi_cache ngx_fcgi_cache;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_use_stale error timeout invalid_header http_500;
    fastcgi_cache_key http://$host$request_uri;
}
Detect the configuration file and restart
[root@nginx1 ~] # mkdir / data
[root@nginx1 ~] # nginx -t
Restart the nginx service
[root@nginx1 ~] # systemctl restart nginx
Gzip tuning
Enable gzip
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf
gzip on;                  # enable this directive
gzip_min_length 1k;
gzip_buffers 4 32k;
gzip_http_version 1.1;
gzip_comp_level 9;
gzip_types text/css text/xml application/javascript;
gzip_vary on;
Copy a test file
[root@nginx1 ~] # cp /etc/passwd /usr/local/nginx/html/passwd.html
Test
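A hedged verification that compression is active, using the passwd.html test file copied above:
[root@nginx1 ~] # curl -I -H "Accept-Encoding: gzip" http://192.168.43.31/passwd.html
# the response headers should now include: Content-Encoding: gzip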
Log cutting optimization
Create the log rotation script
[root@nginx1 ~] # cd /usr/local/nginx/logs/
[root@nginx1 logs] # vim cut_nginx_log.sh
#!/bin/bash
date=$(date +%F -d -1day)
cd /usr/local/nginx/logs
if [ ! -d cut ]; then
  mkdir cut
fi
mv access.log cut/access_$(date +%F -d -1day).log
mv error.log cut/error_$(date +%F -d -1day).log
/usr/local/nginx/sbin/nginx -s reload
tar -jcvf cut/$date.tar.bz2 cut/
rm -rf cut/access* && rm -rf cut/error*
find -type f -mtime +10 | xargs rm -rf
Add execute permission to the script
[root@nginx1 logs] # chmod +x cut_nginx_log.sh
Add scheduled task
[root@nginx1 logs] # cat > > / var/spool/cron/root&1
Eof
Execute the script to view the results
[root@nginx1 logs] # sh cut_nginx_log.sh
[root@nginx1 logs] # ls / usr/local/nginx/logs/cut
Turn off logging for static files that do not need to be counted
[root@nginx1 ~] # vim /usr/local/nginx/conf/nginx.conf      # add the following
location ~ .*\.(js|jpg|jpeg|JPG|JPEG|css|bmp|gif|GIF)$ {
    access_log off;
}
Log format optimization
[root@nginx1 ~] # vim / usr/local/nginx/conf/nginx.conf / / remove comments
LVS+Keepalived load cluster
LVS environment installation configuration (lvs1/lvs2)
Install ipvsadm tools
[root@lvs1 ~] # yum -y install ipvsadm
[root@lvs2 ~] # yum -y install ipvsadm
Keepalived environment installation and configuration
Install keepalived
[root@lvs1 ~] # yum -y install keepalived
[root@lvs2 ~] # yum -y install keepalived
Keepalived configuration
LVS1 configuration
[root@lvs1 ~] # cd /etc/keepalived/      # enter the keepalived configuration directory
[root@lvs1 keepalived] # cp keepalived.conf keepalived.conf.bak      # back up the keepalived main configuration file
[root@lvs1 keepalived] # vim keepalived.conf      # modify the LVS1 configuration file
! Configuration File for keepalived
global_defs {
   router_id LVS_01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.43.100
    }
}
virtual_server 192.168.43.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.43.31 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.43.32 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
LVS2 configuration
[root@lvs2 ~] # cd / etc/keepalived/ # enter the keepalived installation directory
[root@lvs2 keepalived] # cp keepalived.conf keepalived.conf.bak # backup keepalived main configuration file
[root@lvs2 keepalived] # vim keepalived.conf # modify the main configuration file
! Configuration File for keepalived
global_defs {
   router_id LVS_02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.43.100
    }
}
virtual_server 192.168.43.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.43.31 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.43.32 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
Start keepalived
[root@lvs1 ~] # systemctl start keepalived
[root@lvs2 ~] # systemctl start keepalived
View VIP
[root@lvs1 ~] # ip addr
[root@lvs2 ~] # ip addr
Test keepalived failover
[root@lvs1 ~] # systemctl stop keepalived # turn off the main keepalived
[root@lvs1 ~] # ip addr # View IP
Real server (web server) bind VIP
Do the same configuration on both nginx servers
Bind the vip address at the loop
[root@nginx1 ~] # vim / etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.43.100
NETMASK=255.255.255.255
ONBOOT=yes
[root@nginx1 ~] # systemctl restart network
[root@nginx1 ~] # ifconfig
Add VIP local route. Next hop is yourself.
[root@nginx1 ~] # route add -host 192.168.43.100 dev lo:0
[root@nginx1 ~] # vim /etc/rc.local      # add the route command above so it also runs at boot
Adjust the ARP responses
[root@nginx1 ~] # vim /etc/sysctl.conf      # modify the kernel parameters
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@nginx1 ~] # sysctl -p      # reload the kernel parameters
Test using the VIP to access web pages
http://192.168.43.100/
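A hedged check that the director really balances requests across both real servers (run from any client, then inspect the counters on the active LVS node):
[root@ansible ~] # for i in {1..6}; do curl -s http://192.168.43.100/ > /dev/null; done
[root@lvs1 ~] # ipvsadm -Ln --stats      # both real servers 192.168.43.31 and 192.168.43.32 should show traffic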
Elk+kafka introduces message queuing
Upload software package
[root@elk ~] # rz
[root@elk ~] # ls
elasticsearch-6.4.2.tar.gz elasticsearch-head-master.zip jdk-8u171-linux-x64.tar.gz kibana-6.4.2-linux-x86_64.tar.gz logstash-6.4.2.zip node-v4.9.1-linux-x64.tar.gz
Install JDK
[root@elk ~] # tar zxvf jdk-8u171-linux-x64.tar.gz -C /usr/local/
[root@elk ~] # vim /etc/profile      # configure the environment variables
export JAVA_HOME=/usr/local/jdk1.8.0_171
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@elk ~] # source /etc/profile
[root@elk ~] # java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
Install elasticsearch
[root@elk ~] # tar -zxf elasticsearch-6.4.2.tar.gz -C /usr/local/
1. Elasticsearch cannot be started as root, so create a dedicated user and grant it ownership.
2. Elasticsearch requires the maximum number of open files to be 65536.
3. Elasticsearch requires the maximum number of threads to be no less than 2048.
4. Elasticsearch requires vm.max_map_count to be no less than 262144.
Create a user
[root@elk ~] # useradd elk
[root@elk ~] # chown -Rf elk:elk /usr/local/elasticsearch-6.4.2
Modify the system parameters
[root@elk ~] # vim /etc/security/limits.conf
Add at the end of the file:
# maximum number of open files
* soft nofile 65536
* hard nofile 65536
# maximum number of processes
* soft nproc 4096
* hard nproc 4096
[root@elk ~] # vim /etc/sysctl.conf
Add at the end of the file:
vm.max_map_count=655360
[root@elk ~] # sysctl -p      # apply the kernel parameters
Start elasticsearch
[root@elk ~] # su elk
[elk@elk root] $cd / usr/local/elasticsearch-6.4.2/
[elk@elk elasticsearch-6.4.2] $vim config/elasticsearch.yml
Ensure that the following parameters are set correctly:
cluster.name: gsp                                   # name of the ES cluster
node.name: node-1                                   # node name
path.data: /usr/local/elasticsearch-6.4.2/data      # data storage directory
path.logs: /usr/local/elasticsearch-6.4.2/logs      # elasticsearch's own log directory; the working user must have write permission to both directories
network.host: 0.0.0.0                               # allow remote access
http.port: 9200                                     # RESTful API port
http.cors.enabled: true                             # allow ES head cross-domain access (added)
http.cors.allow-origin: "*"                         # allow ES head cross-domain access (added)
[elk@elk elasticsearch-6.4.2] $vim config/jvm.options
Change:
-Xms1g
-Xmx1g
Change to:
-Xms256m
-Xmx256m
[elk@elk elasticsearch-6.4.2]$ mkdir -p data logs    # created as the working user, so the folders already have write permission
[elk@elk elasticsearch-6.4.2]$ ./bin/elasticsearch &
[elk@elk elasticsearch-6.4.2]$ jps -m
1556 Elasticsearch
1594 Jps -m
test
[elk@elk elasticsearch-6.4.2]$ curl 192.168.43.80:9200
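If curl returns the node banner, the service is up; a further optional check uses the standard cluster health endpoint:
[elk@elk elasticsearch-6.4.2]$ curl -s "http://192.168.43.80:9200/_cluster/health?pretty"    # status should be green or yellow for a single node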
Install ElasticSearch-head (omitted)
elasticsearch-head serves as a web console for Elasticsearch and is extremely useful when running an Elasticsearch cluster.
[root@elk ~]# tar -zxf node-v4.9.1-linux-x64.tar.gz -C /usr/local/
[root@elk ~]# vim /etc/profile    # add environment variables
export NODE_HOME=/usr/local/node-v4.9.1-linux-x64
PATH=$NODE_HOME/bin:$PATH:$HOME/.local/bin:$HOME/bin
export NODE_PATH=$NODE_HOME/lib/node_modules
export PATH
[root@elk ~] # unzip elasticsearch-head-master.zip
[root@elk ~] # cd elasticsearch-head-master
With network access, set the npm registry and install:
[root@elk elasticsearch-head-master] # npm config set registry https://registry.npm.taobao.org
[root@elk elasticsearch-head-master] # npm install
[root@elk elasticsearch-head-master] # vim Gruntfile.js
If only local access is needed, the following change is not required; if other machines should be able to access it, modify it. For a server application, remote access is normally required.
Install Logstash
[root@elk ~]# unzip logstash-6.4.2.zip -d /usr/local/
[root@elk ~]# cd /usr/local/logstash-6.4.2/config
Generate a configuration file:
[root@elk config] # cp logstash-sample.conf logstash.conf
[root@elk config] # vim logstash.conf
input {
  file {
    path => "/var/log/messages"            # collection source, i.e. the log location
    start_position => "beginning"          # where to start; "beginning" means read from the start when the logstash process starts (commonly used)
  }
}
filter {                                   # log filtering rules
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]    # normalize the log timestamp, otherwise the log time may be wrong
  }
}
output {
  elasticsearch {
    hosts => ["192.168.43.80:9200"]        # the elasticsearch address
    index => "logstash-var-messages-%{+YYYY.MM.dd}"     # index name; search for it in kibana to display the collected logs
  }
  stdout { codec => rubydebug }            # also print events to the current terminal
}
* The inline comments above are explanatory; remove any comments from the actual configuration file.
Official reference document page: https://www.elastic.co/guide/en/logstash/current/index.html
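Optionally, the pipeline file can be syntax-checked before starting it for real; the --config.test_and_exit flag is available in Logstash 6.x (run from the logstash-6.4.2 directory):
./bin/logstash -f ./config/logstash.conf --config.test_and_exit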
[root@elk config]# cd ..    # return to the parent directory
[root@elk logstash-6.4.2]# nohup ./bin/logstash -f ./config/logstash.conf &    # start in the background; output is written to nohup.out
[root@elk logstash-6.4.2]# tail -F nohup.out
Install kibana
Extract the installation package
[root@elk ~]# tar -zxf kibana-6.4.2-linux-x86_64.tar.gz -C /usr/local/
Compile configuration file
[root@elk ~]# cd /usr/local/kibana-6.4.2-linux-x86_64/config
[root@elk config] # vim kibana.yml
Modify:
server.port: 5601                                 # listening port
server.host: "192.168.43.80"                      # own IP address
elasticsearch.url: "http://192.168.43.80:9200"    # IP address and port of elasticsearch
Start the service
[root@elk ~]# /usr/local/kibana-6.4.2-linux-x86_64/bin/kibana &
test
192.168.43.80:5601
Here you can see the index name of our configuration file on logstash. Kibana uses the wildcard rule to verify the index name:
Here, combined with the official documents, you can customize the edits according to the actual needs:
Click on the Discover on the left to see the logs we collected
If no logs appear, it is usually because the default query time range is too short; adjust it with the Time Range picker in the upper right corner.
At this point, the environment building and basic configuration of ELK is complete.
For more configuration and optimization, see the official documentation https://www.elastic.co/guide/index.html.
Install Filebeat on the host being collected and connect it to Logstash
Nginx logs are collected here; for ELK + Kafka, see the following sections
Upload software package
[root@filebeat ~] # rz
Decompression package
[root@filebeat ~]# tar -zxf filebeat-6.4.2-linux-x86_64.tar.gz -C /usr/local/
Install jdk
[root@filebeat ~]# tar zxvf jdk-8u171-linux-x64.tar.gz -C /usr/local/
[root@filebeat ~]# vim /etc/profile    # configure environment variables
export JAVA_HOME=/usr/local/jdk1.8.0_171
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@filebeat ~]# source /etc/profile
[root@filebeat ~]# java -version
Install logstash
[root@filebeat ~]# unzip logstash-6.4.2.zip -d /usr/local/
[root@filebeat ~] # cd / usr/local/logstash-6.4.2/config
Generate a configuration file:
[root@filebeat config] # cp logstash-sample.conf logstash.conf
[root@filebeat config] # vim logstash.conf
input {
  file {
    path => "/var/log/messages"            # collection source, i.e. the log location
    start_position => "beginning"          # where to start; "beginning" means read from the start when the logstash process starts (commonly used)
  }
}
filter {                                   # log filtering rules
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]    # normalize the log timestamp, otherwise the log time may be wrong
  }
}
output {
  elasticsearch {
    hosts => ["192.168.43.80:9200"]
    index => "filebeat-var-messages-%{+YYYY.MM.dd}"     # index name; search for it in kibana to display the collected logs
  }
  stdout { codec => rubydebug }            # also print events to the current terminal
}
* The inline comments above are explanatory; remove any comments from the actual configuration file.
Official reference document page: https://www.elastic.co/guide/en/logstash/current/index.html
[root@filebeat config]# cd ..    # return to the parent directory
[root@filebeat logstash-6.4.2]# nohup ./bin/logstash -f ./config/logstash.conf &    # start in the background; output is written to nohup.out
[root@filebeat logstash-6.4.2]# tail -F nohup.out
Compile configuration file
[root@filebeat ~]# cd /usr/local/filebeat-6.4.2-linux-x86_64/
[root@filebeat filebeat-6.4.2-linux-x86_64] # vim filebeat.yml
Modify the following:
# under the existing filebeat.inputs section
- type: log
  enabled: true                        # enabled
  paths:
    - /usr/local/nginx/logs/*.log      # the official recommendation is one log collection stream per service
  scan_frequency: 60
- type: log                            # a different kind of log content needs its own log stream
  paths:
    - /var/log/messages
  scan_frequency: 60
output.logstash:                       # look for "output.logstash" in the file; filebeat already defines the input and output sections, otherwise an error may be reported
  hosts: ["192.168.43.80:5044"]        # IP address and port of logstash
Configure the configuration file on Logstash
[root@filebeat ~]# cd /usr/local/logstash-6.4.2/
[root@filebeat logstash-6.4.2]# vim config/nginxlog.conf
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.43.80:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
[root@filebeat logstash-6.4.2]# jps -m    # find the original logstash process and kill it
22257 Logstash -f ./config/logstash.conf
22332 Jps -m
[root@filebeat logstash-6.4.2]# kill -9 22257
[root@filebeat logstash-6.4.2]# nohup ./bin/logstash -f ./config/nginxlog.conf &
[root@filebeat logstash-6.4.2]# tail -F nohup.out
Start the filebeat process
[root@filebeat filebeat-6.4.2-linux-x86_64]# ./filebeat -e -c filebeat.yml &
Verification
If the collected logs are visible in Kibana, the setup is working.
ELK + Kafka: introducing the message queue
Install nginx
Upload software package
[root@logstash ~] # ls
[root@logstash ~]# tar -zxvf nginx-1.10.3.tar.gz -C /usr/local/src/
[root@logstash ~]# cd /usr/local/src/nginx-1.10.3/
Install the nginx dependency packages
[root@logstash nginx-1.10.3]# yum install -y gcc gcc-c++ autoconf automake zlib zlib-devel openssl openssl-devel pcre pcre-devel
Pre-compilation
[root@logstash nginx-1.10.3]# ./configure --prefix=/usr/local/nginx --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre
Compilation and installation
[root@logstash nginx-1.10.3]# make -j 4 && make install
Start nginx
[root@logstash nginx-1.10.3]# /usr/local/nginx/sbin/nginx
View the port number
[root@logstash nginx-1.10.3]# netstat -antup | grep 80
tcp    0    0 0.0.0.0:80    0.0.0.0:*    LISTEN    nginx: master
Install Logstash on the nginx host (and install zookeeper + kafka)
Install zookeeper
[root@logstash ~] # ls
kafka_2.12-2.2.0.tgz apache-zookeeper-3.5.5-bin.tar.gz
Extract the installation package
[root@logstash ~]# tar -zxf apache-zookeeper-3.5.5-bin.tar.gz -C /usr/local/
- create the snapshot (data) directory
[root@logstash ~]# mkdir -p /data/zk/data
- create a directory for storing transaction logs
[root@logstash ~]# mkdir -p /data/zk/datalog
- generate the configuration file
[root@logstash ~]# cd /usr/local/apache-zookeeper-3.5.5-bin/conf/
[root@logstash conf]# ls
configuration.xsl log4j.properties zoo_sample.cfg
[root@logstash conf]# cp zoo_sample.cfg zoo.cfg
- modify the main configuration file zoo.cfg
[root@logstash conf]# vim zoo.cfg
dataDir=/data/zk/data          # change this line to the directory created above
dataLogDir=/data/zk/datalog    # add this line
-add path environment variable
[root@logstash ~]# vim /etc/profile
export ZOOKEEPER_HOME=/usr/local/apache-zookeeper-3.5.5-bin
export PATH=$ZOOKEEPER_HOME/bin:$PATH
[root@logstash ~]# source /etc/profile
Start zookeeper
[root@logstash ~] # zkServer.sh start
ZooKeeper JMX enabled by default
Using config: / usr/local/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
Starting zookeeper... STARTED
Verification
[root@logstash ~] # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: / usr/local/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: standalone
Add to boot startup
[root@logstash ~]# echo "zkServer.sh start" >> /etc/rc.local
[root@logstash ~]# chmod +x /etc/rc.local
Install kafka
- unpack the installation package
[root@filebeat ~]# tar -zxvf kafka_2.12-2.2.0.tgz -C /usr/local/
[root@filebeat ~]# cd /usr/local/kafka_2.12-2.2.0/
[root@filebeat kafka_2.12-2.2.0]# ls
bin config libs LICENSE NOTICE site-docs
- modify the main configuration file
[root@filebeat ~]# vim /usr/local/kafka_2.12-2.2.0/config/server.properties
# the globally unique id of the broker; must not be duplicated
broker.id=0
# listener
listeners=PLAINTEXT://:9092        # enable this line
# log directory
log.dirs=/data/kafka/log           # change the log directory
# zookeeper connection (use the ip or hostname if it is not local)
zookeeper.connect=localhost:2181
- create the log directory
[root@filebeat ~]# mkdir -p /data/kafka/log
- add the path environment variable
[root@filebeat ~]# vim /etc/profile
export KAFKA_HOME=/usr/local/kafka_2.12-2.2.0
export PATH=$KAFKA_HOME/bin:$PATH
[root@filebeat ~]# source /etc/profile
- start kafka
[root@filebeat ~]# kafka-server-start.sh $KAFKA_HOME/config/server.properties
- or start kafka in the background
[root@filebeat ~]# kafka-server-start.sh -daemon /usr/local/kafka_2.12-2.2.0/config/server.properties
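The "test" topic referenced later has to exist before the consumer and the logstash output can use it; a minimal sketch for this single-broker setup (Kafka 2.2 still accepts the --zookeeper form):
[root@filebeat ~]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
[root@filebeat ~]# kafka-topics.sh --list --zookeeper localhost:2181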
[root@logstash ~] # ls
Install jdk
[root@logstash ~]# tar zxvf jdk-8u171-linux-x64.tar.gz -C /usr/local/
[root@logstash ~]# vim /etc/profile    # configure environment variables
export JAVA_HOME=/usr/local/jdk1.8.0_171
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@logstash ~]# source /etc/profile
[root@logstash ~]# java -version
Decompression package
[root@logstash ~]# unzip logstash-6.4.2.zip -d /usr/local/
Generate configuration file
[root@logstash ~]# cd /usr/local/logstash-6.4.2/config
[root@logstash config]# cp logstash-sample.conf logstash.conf
[root@logstash config]# cd ..
[root@logstash logstash-6.4.2]# vim config/logstash.conf
input {
  file {
    path => "/usr/local/nginx/logs/access.log"    # nginx access log
    start_position => "beginning"                 # read from the beginning when the logstash process starts
  }
}
output {
  kafka {
    bootstrap_servers => "192.168.43.81:9092"     # send the logs to kafka
    topic_id => "test"                            # the topic we created on kafka
  }
}
[root@logstash logstash-6.4.2]# nohup ./bin/logstash -f ./config/logstash.conf &
[root@logstash logstash-6.4.2]# tail -F nohup.out
[root@filebeat logstash-6.4.2]# nohup ./bin/logstash -f ./config/logstash.conf &
[root@filebeat logstash-6.4.2]# tail -F nohup.out
[root@filebeat logstash-6.4.2]# jps -m
23509 Logstash -f ./config/logstash.conf
[root@filebeat ~]# cd /usr/local/bin
[root@filebeat bin]# kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning    # start the consumer
Visit nginx
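If no nginx traffic is available yet, a test message can be pushed by hand to confirm the consumer is wired up (kafka-console-producer.sh ships with Kafka):
[root@filebeat ~]# echo "hello from kafka" | kafka-console-producer.sh --broker-list localhost:9092 --topic test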
Configure Logstash on the kafka side (the side that consumes the collected logs)
Decompression package
[root@filebeat ~]# tar -zxf kibana-6.4.2-linux-x86_64.tar.gz -C /usr/local/
Generate configuration file
[root@filebeat ~]# cd /usr/local/logstash-6.4.2/config
[root@filebeat config]# vim logstash_for_kafka.conf
input {
  kafka {
    bootstrap_servers => "192.168.43.81:9092,192.168.43.81:9093,192.168.43.81:9094"    # cluster brokers and their ports, separated by commas
    topics => ["test"]              # the topic we created in kafka
    id => "GspTest"                 # a unique ID for this log stream, which helps when monitoring it
    decorate_events => true         # add metadata about the consumed message (size, topic source, consumer group) to the event
    consumer_threads => 5           # one consumer thread per partition gives load balancing; in kafka one partition is consumed by one thread, so size this against the partition count (see the check after this block); extra threads simply stay idle, so setting it too high does no harm
  }
}
output {
  elasticsearch {
    hosts => ["192.168.43.80:9200"]
    index => "logstash-kafka-%{+YYYY.MM.dd}"
  }
}
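consumer_threads should not exceed the partition count of the topic; a quick way to check the partitions (assuming the kafka scripts are on the PATH as set up earlier):
[root@filebeat ~]# kafka-topics.sh --describe --zookeeper localhost:2181 --topic test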
[root@filebeat config]# cd ..
[root@filebeat logstash-6.4.2]# nohup ./bin/logstash -f ./config/logstash_for_kafka.conf &
[root@filebeat logstash-6.4.2]# tail -F nohup.out
[root@filebeat logstash-6.4.2]# jps -m
24032 Logstash -f ./config/logstash_for_kafka.conf
24445 Jps -m
test
Visit nginx several times, then open kibana at 192.168.43.80:5601
Zabbix
IP address       Hostname        Role
192.168.43.20    zabbix-server   Zabbix-server
192.168.43.21    zabbix-web      Zabbix-web
Establish time synchronization environment and build time synchronization server on zabbix-server
Install NTP (turn off firewall / selinux)
[root@zabbix-server ~]# yum -y install ntp
Configure NTP
[root@zabbix-server ~]# vim /etc/ntp.conf
server 127.127.1.0               # use the local clock as the time source
fudge 127.127.1.0 stratum 8      # set the stratum of the local clock to 8
Restart the service and set it to boot
[root@zabbix-server ~] # systemctl start ntpd
[root@zabbix-server ~] # systemctl enable ntpd
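A quick check that the local NTP server is serving time (ntpq ships with the ntp package):
[root@zabbix-server ~]# ntpq -p    # the LOCAL(0) entry should appear with stratum 8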
Time synchronization on other servers (turn off firewall / selinux)
[root@mha-master ~]# yum -y install ntpdate
[root@mha-master ~]# ntpdate 192.168.43.20    # use the address of your own time server
Create a zabbix database and authorized users:
Authorization
[root@zabbix-server ~] # scp zabbix-3.2.6.tar.gz 192.168.43.110:/root
[root@zabbix-server ~] # ssh 192.168.43.110
[root@mha-master ~]# mysql -uroot -p123456
mysql>
CREATE DATABASE zabbix;
GRANT ALL ON zabbix.* TO 'zabbix'@'192.168.43.%' IDENTIFIED BY '123456';
GRANT ALL ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY '123456';
GRANT ALL ON zabbix.* TO 'zabbix'@'mha-master' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;
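Optionally confirm the grants before moving on; this assumes a mysql client is available on the zabbix-server host (not a step from the original procedure):
[root@zabbix-server ~]# mysql -h 192.168.43.110 -uzabbix -p123456 -e "SHOW GRANTS FOR CURRENT_USER();"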
Import the database file:
Download address
[root@mha-master ~] # wget https://sourceforge.net/projects/zabbix/files/ZABBIX%20Latest%20Stable/3.2.6/zabbix-3.2.6.tar.gz/download
It can also be downloaded and uploaded
Import database
[root@mha-master ~]# tar xf zabbix-3.2.6.tar.gz -C /usr/local/src/
[root@mha-master ~]# cd /usr/local/src/zabbix-3.2.6/database/mysql/
[root@mha-master mysql]# mysql -uzabbix -p123456 zabbix < schema.sql
[root@mha-master mysql]# mysql -uzabbix -p123456 zabbix < images.sql
[root@mha-master mysql]# mysql -uzabbix -p123456 zabbix < data.sql    # the import order must not be changed
Install the Zabbix-Server server
Compile and install zabbix on zabbix-server:
Install the dependency packages
[root@zabbix-server ~]# yum -y install mysql-devel libxml2-devel net-snmp-devel libcurl-devel gcc
[root@zabbix-server ~]# tar xf zabbix-3.2.6.tar.gz -C /usr/local/src/
[root@zabbix-server ~]# cd /usr/local/src/zabbix-3.2.6/
Create a user:
[root@zabbix-server zabbix-3.2.6]# useradd -s /sbin/nologin zabbix
Create a directory:
[root@zabbix-server zabbix-3.2.6]# mkdir -p /data/zabbix/logs
Grant permissions:
[root@zabbix-server zabbix-3.2.6]# chown -R zabbix:zabbix /data/zabbix/logs/
Pre-compilation
[root@zabbix-server zabbix-3.2.6]# ./configure --prefix=/usr/local/zabbix --enable-server --with-mysql --with-net-snmp --with-libcurl --with-libxml2
Installation
[root@zabbix-server zabbix-3.2.6]# make install
Edit the configuration file and start:
[root@zabbix-server ~]# cd /usr/local/zabbix/etc/
[root@zabbix-server etc]# cp zabbix_server.conf zabbix_server.conf.bak
[root@zabbix-server etc]# vim /usr/local/zabbix/etc/zabbix_server.conf
LogFile=/data/zabbix/logs/zabbix_server.log
DBHost=192.168.43.110       # database address, uncomment it
DBName=zabbix               # database name
DBUser=zabbix               # database user
DBPassword=123456           # password of the database user (add it, or uncomment it)
ListenIP=127.0.0.1,192.168.43.20
Start the server
[root@zabbix-server etc]# /usr/local/zabbix/sbin/zabbix_server
Check the listening port
[root@zabbix-server ~]# netstat -anput | grep zabbix_server
Set up the startup script
[root@zabbix-server etc]# cd /usr/local/src/zabbix-3.2.6/
[root@zabbix-server etc]# cp misc/init.d/tru64/zabbix_server /etc/init.d/
[root@zabbix-server etc]# chmod +x /etc/init.d/zabbix_server
Create soft links
[root@zabbix-server etc]# ln -s /usr/local/zabbix/sbin/ /usr/local/sbin/
[root@zabbix-server etc]# ln -s /usr/local/zabbix/bin/ /usr/local/bin/
Set up self-start
[root@zabbix-server etc]# vim /etc/rc.d/init.d/zabbix_server    # add the following two lines at the second line
#chkconfig: 2345 10 90
#description: zabbix server
Note: the digits 2345 after chkconfig are the run levels at which the service starts by default; 10 is the start priority and 90 is the stop priority, and the lower the number, the higher the priority.
Save and exit, then run
[root@zabbix-server etc]# chkconfig --add zabbix_server
[root@zabbix-server etc]# chkconfig zabbix_server on
[root@zabbix-server etc]# service zabbix_server restart
Install the Zabbix-Web server (compile and install nginx)
Install nginx and php on the Zabbix-Web host:
Nginx installation and optimization
Upload the software package and extract it
[root@zabbix-web ~]# ls
anaconda-ks.cfg nginx-1.10.3.tar.gz
[root@zabbix-web ~]# tar -zxvf nginx-1.10.3.tar.gz -C /usr/local/src/
Modify the source code to hide the software name and version number
[root@zabbix-web ~]# cd /usr/local/src/nginx-1.10.3/
[root@zabbix-web nginx-1.10.3]# vim src/core/nginx.h    # modify the lines shown below
13 #define NGINX_VERSION "8.8.8"             # change the version number
14 #define NGINX_VER "web/" NGINX_VERSION    # change the server name
[root@zabbix-web nginx-1.10.3]# vim src/http/ngx_http_header_filter_module.c
49 static char ngx_http_server_string[] = "Server: web" CRLF;    # modify this line
# change the Server field in the HTTP response headers so that the exact version number is not echoed
Background: general HTTP header fields are supported by both request and response messages and include Cache-Control, Connection, Date, Pragma, Transfer-Encoding, Upgrade and Via. Extensions to the general header fields require both sides of the communication to support them; unsupported general header fields are usually treated as entity header fields. In other words, some devices or software can read this header information and some cannot, so if you want to hide it, hide it thoroughly.
[root@zabbix-web nginx-1.10.3]# vim src/http/ngx_http_special_response.c
# this file defines the pages returned for HTTP error codes; when a page errors out, Nginx returns the corresponding error page, which by default echoes "nginx" and its version number, so we hide it (prevents the version from showing when a page fails)
22 "" NGINX_VER "" CRLF    # in old versions this line had to be changed to "web"; here it does not need changing because it already uses the NGINX_VER variable
Install the nginx dependency packages
[root@zabbix-web nginx-1.10.3]# yum install -y gcc gcc-c++ autoconf automake zlib zlib-devel openssl openssl-devel pcre pcre-devel
Pre-compilation
[root@zabbix-web nginx-1.10.3]# ./configure --prefix=/usr/local/nginx --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre
--with-http_dav_module           # adds the PUT, DELETE, MKCOL (create collection), COPY and MOVE methods; disabled by default, must be enabled at compile time
--with-http_stub_status_module   # exposes Nginx working status since the last start
--with-http_addition_module      # output filter supporting partial buffering and partial responses
--with-http_sub_module           # allows other text to replace some text in Nginx responses
--with-http_flv_module           # support for flv video files
--with-http_mp4_module           # support for mp4 video files (pseudo-streaming)
--with-pcre                      # note: this refers to the pcre source; check with ./configure --help | grep pcre; if compiling pcre from source, pass --with-pcre=<path of the pcre source>
Compile and install
[root@zabbix-web nginx-1.10.3]# make -j 4 && make install
Start nginx
[root@zabbix-web ~]# /usr/local/nginx/sbin/nginx
Check the port number
[root@zabbix-web ~]# netstat -antup | grep 80
tcp    0    0 0.0.0.0:80    0.0.0.0:*    LISTEN    7423/nginx: master
Test
[root@zabbix-web ~]# curl -I 192.168.43.21
HTTP/1.1 200 OK
Server: web/8.8.8
Date: Wed, 03 Jul 2019 08:34:09 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Wed, 03 Jul 2019 08:31:22 GMT
Connection: keep-alive
ETag: "5d1c67da-264"
Accept-Ranges: bytes
Website test
http://192.168.43.21/
Check which account nginx currently runs as
[root@zabbix-web ~]# ps -aux | grep nginx    # the default is the nobody user
Create the nginx account
[root@zabbix-web ~]# useradd -M -s /sbin/nologin nginx
Change the nginx running account
[root@zabbix-web ~]# vim /usr/local/nginx/conf/nginx.conf
Change:
#user nobody;
to:
user nginx;
Add the path variable
[root@zabbix-web ~]# ln -s /usr/local/nginx/sbin/ /usr/local/bin/
Reload nginx
[root@zabbix-web ~]# nginx -s reload
Check the running account
[root@zabbix-web ~]# ps -aux | grep nginx
Here you can see that the worker processes now run as the nginx user while the master process is still root. The master is the supervising (main) process and the workers do the actual work, so the master may keep running as root while the workers are dropped to an ordinary user; if everything were to run as an ordinary user, the compile and install would also have to be done as that user (via sudo). In short, the master is the manager and the worker processes are the ones serving users.
Generate the service startup script
[root@zabbix-web ~]# vim /etc/init.d/nginx
#!/bin/bash
# chkconfig: - 99 20
# description: Nginx Service Control Script
PROG="/usr/local/nginx/sbin/nginx"
PIDF="/usr/local/nginx/logs/nginx.pid"
case "$1" in
start)
    $PROG
    ;;
stop)
    kill -3 $(cat $PIDF)
    ;;
restart)
    $0 stop &> /dev/null
    if [ $? -ne 0 ]; then continue; fi
    $0 start
    ;;
reload)
    kill -1 $(cat $PIDF)
    ;;
*)
    echo "Usage: $0 {start|stop|restart|reload}"
    exit 1
esac
exit 0
Configure the service to boot automatically
[root@zabbix-web ~]# chmod +x /etc/init.d/nginx      # add executable permission to the script
[root@zabbix-web ~]# chkconfig --add nginx           # add nginx as a system service
[root@zabbix-web ~]# chkconfig nginx on              # enable nginx at boot
[root@zabbix-web ~]# chkconfig --list nginx          # view the nginx boot startup entries
Php installation
In Nginx we use php-fpm to parse PHP pages. PHP-FPM was originally a patch against the PHP source code that integrated FastCGI process management into PHP; you had to patch your PHP source and then compile and install PHP before you could use it.
Starting with PHP 5.3.3, PHP-FPM is integrated directly into PHP, so from PHP 5.3.3 onward there is no need to download the PHP-FPM patch. See the official PHP-FPM notice:
http://php-fpm.org/download
Installation dependencies:
[root@zabbix-web ~]# yum -y install php-mcrypt libmcrypt libmcrypt-devel autoconf freetype gd libmcrypt libpng libpng-devel libjpeg libxml2 libxml2-devel zlib curl curl-devel re2c bzip2-devel libmcrypt-devel freetype-devel libjpeg-devel
Install libmcrypt
[root@zabbix-web ~] # ls
anaconda-ks.cfg libmcrypt-2.5.7.tar.gz nginx-1.10.3.tar.gz php-5.6.36.tar.gz
[root@zabbix-web ~]# tar zxf libmcrypt-2.5.7.tar.gz
[root@zabbix-web ~]# cd libmcrypt-2.5.7/
[root@zabbix-web libmcrypt-2.5.7]# ./configure --prefix=/usr/local/libmcrypt && make && make install
Extract the PHP package
[root@zabbix-web ~]# tar -zxvf php-5.6.36.tar.gz -C /usr/local/src/
Pre-compilation
[root@zabbix-web ~]# cd /usr/local/src/php-5.6.36/
[root@zabbix-web php-5.6.36]# ./configure --prefix=/usr/local/php5.6 --with-mysql=mysqlnd --with-pdo-mysql=mysqlnd --with-mysqli=mysqlnd --with-openssl --enable-fpm --enable-sockets --enable-sysvshm --enable-mbstring --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --with-mhash --with-mcrypt=/usr/local/libmcrypt --with-config-file-path=/etc --with-config-file-scan-dir=/usr/local/php5.6/etc/ --with-bz2 --enable-maintainer-zts --with-gd --enable-bcmath --with-gettext
Parameter option
Php configuration options Chinese manual
Http://php.net/manual/zh/configure.about.php
Explanation of the relevant options:
--prefix=/usr/local/php5.6            # installation location
--with-mysql=mysqlnd                  # mysql support
--with-pdo-mysql=mysqlnd              # pdo module support
--with-mysqli=mysqlnd                 # mysqli module support
Note: the three options above are used when the database and php are not on the same server; they install the database connection driver
--with-apxs2                          # compile php as an Apache module
--enable-mbstring                     # multibyte string support
--with-curl                           # cURL support
--with-gd                             # gd library support
--enable-fpm                          # build fpm
--with-config-file-path               # set the configuration file path
--with-openssl                        # openssl module support
--enable-fpm                          # fpm mode support
--enable-sockets                      # enable socket support
--enable-sysvshm                      # enable system shared memory support
--enable-mbstring                     # multibyte strings, such as Chinese text
--with-freetype-dir                   # freetype support (freetype-devel; font parsing)
--with-jpeg-dir
--with-png-dir
Note: the two options above let php process jpeg and png images, so php can generate jpeg images dynamically
--with-zlib                           # compression library, used for compressed transfer over the Internet
--with-libxml-dir=/usr                # libxml is used to parse xml; located under /usr
--enable-xml                          # xml support
--with-mhash                          # mhash support
--with-mcrypt=/usr/local/libmcrypt    # provided by the libmcrypt-devel package
--with-config-file-path=/usr/local/php5.6/etc    # path where the configuration file is stored
--with-config-file-scan-dir=/etc/php.d           # configuration file scan path
--with-bz2                            # BZip2 support
If you use PHP 5.3 or later, you can specify mysqlnd to link against MySQL, so you do not need to install MySQL or the MySQL development packages on the machine first. mysqlnd has been available since PHP 5.3 and can be bound at compile time (without depending on a specific MySQL client library); since PHP 5.4 it is the default.
Compile
Run top and press 1 to see how many cores the machine has
[root@zabbix-web php-5.6.36]# make -j 4    # in a lab you can pass the number of cores to speed up the build
Installation
[root@zabbix-web php-5.6.36] # make install
Modify the name of the fpm configuration php-fpm.conf.default file
[root@zabbix-web ~]# cd /usr/local/php5.6/etc/
[root@zabbix-web etc] # cp php-fpm.conf.default php-fpm.conf
Change the default running account
Set the default running user and group to nginx
[root@zabbix-web etc]# vim php-fpm.conf
user = nginx
group = nginx
Generate php.ini configuration file
[root@zabbix-web ~]# cp /usr/local/src/php-5.6.36/php.ini-production /usr/local/php5.6/etc/php.ini
Copy the php-fpm startup script to init.d
[root@zabbix-web ~]# cp /usr/local/src/php-5.6.36/sapi/fpm/init.d.php-fpm /etc/init.d/php-fpm
Grant execute permission
[root@zabbix-web ~]# chmod +x /etc/init.d/php-fpm
Add to boot startup
[root@zabbix-web ~]# chkconfig --add php-fpm
[root@zabbix-web ~]# chkconfig php-fpm on
Start the service
[root@zabbix-web ~]# /etc/init.d/php-fpm start
Starting php-fpm done
View port snooping status
[root@zabbix-web ~]# netstat -antpu | grep php-fpm
tcp    0    0 127.0.0.1:9000    0.0.0.0:*    LISTEN    7767/php-fpm: master
Configure nginx to support index.php
Modify the configuration file
[root@zabbix-web ~]# vim /usr/local/nginx/conf/nginx.conf
# add the part shown below
location / {
    root html;
    index index.php index.html index.htm;    # add index.php
}
# uncomment the block below and note the modified path
location ~ \.php$ {
    root html;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /usr/local/nginx/html$fastcgi_script_name;
    include fastcgi_params;
}
# the SCRIPT_FILENAME path above must point at the web site's document root
Create a test page
[root@zabbix-web ~]# vim /usr/local/nginx/html/index.php    # php test page
[root@zabbix-web ~]# vim /usr/local/nginx/html/test.php     # mysql connection test page
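A possible minimal content for the two test pages (the database address and the test account used here are the ones created later in this document and are only an assumption; adjust as needed):
[root@zabbix-web ~]# cat > /usr/local/nginx/html/index.php <<'EOF'
<?php phpinfo(); ?>
EOF
[root@zabbix-web ~]# cat > /usr/local/nginx/html/test.php <<'EOF'
<?php
$link = mysqli_connect('192.168.43.110', 'test', '123456');
echo $link ? 'mysql connection OK' : 'mysql connection failed';
?>
EOF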
[root@localhost ~]# nginx -t
[root@localhost ~]# nginx -s reload
Create a mysql test account in the database
On 192.168.1.12, restore the snapshot of the source-compiled mysql installation and create an authorized test account
[root@mha-master ~]# mysql -uroot -p123456
mysql> grant all on *.* to test@'%' identified by '123456';
mysql> flush privileges;
mysql> show grants for test;    # check the test user's authorization
test
http://192.168.43.21/
Cancel authorization
mysql> revoke all on *.* from 'test'@'%';    # revoke the authorization
mysql> show grants for test;
Configure the web page of zabbix
[root@zabbix-web ~]# wget https://sourceforge.net/projects/zabbix/files/ZABBIX%20Latest%20Stable/3.2.6/zabbix-3.2.6.tar.gz/download    # already downloaded earlier; just upload the package
[root@zabbix-web ~]# tar xf zabbix-3.2.6.tar.gz -C /usr/local/src/
[root@zabbix-web ~]# mkdir /usr/local/nginx/html/zabbix/
[root@zabbix-web ~]# cd /usr/local/src/zabbix-3.2.6/frontends/php/
[root@zabbix-web php]# cp -a . /usr/local/nginx/html/zabbix/
View the current system time zone
[root@zabbix-web php] # timedatectl
Modify the configuration file to support zabbix
[root@zabbix-web ~]# vim /usr/local/php5.6/etc/php.ini
date.timezone = Asia/Shanghai
post_max_size = 16M
max_execution_time = 300
max_input_time = 300
always_populate_raw_post_data = -1
[root@zabbix-web php]# /etc/init.d/php-fpm restart
Install zabbix
Use a browser to access http://192.168.43.21/zabbix and follow the prompts to install:
Once every check shows OK you can continue.
If you specify zabbix-server incorrectly, you can modify it in the / usr/local/nginx/html/zabbix/conf/zabbix.conf.php configuration file.
If you want to reassign the address of the database and nginx, you can specify it in the configuration file
vim /usr/local/nginx/html/zabbix/conf/zabbix.conf.php    # write the configuration manually
Zabbix user Management
Modify the password
Change the password to 123456 and change the language to Chinese
Create a user
Click add to join the corresponding group
Upload the simkai.ttf font package to fix garbled Chinese in Zabbix
[root@zabbix-web ~]# cp simkai.ttf /usr/local/nginx/html/zabbix/fonts
[root@zabbix-web ~]# ls /usr/local/nginx/html/zabbix/fonts
[root@zabbix-web ~]# vim /usr/local/nginx/html/zabbix/include/defines.inc.php
define('ZBX_FONT_NAME', 'simkai');    # modified
[root@zabbix-web ~]# nginx -s reload
Install the Zabbix-Agent side
Install on the mha-slave1 host:
Copy a file
[root@zabbix-server ~] # scp zabbix-3.2.6.tar.gz 192.168.43.12:/root # copy files
[root@zabbix-server ~] # ssh 192.168.43.12
Install GCC on the client server
[root@mha-slave1 ~]# yum install -y gcc
Synchronize the time
[root@mha-slave1 ~]# yum -y install ntpdate    # for time synchronization
[root@mha-slave1 ~]# ntpdate 192.168.43.20
Install zabbix-agent
# or upload zabbix-3.2.6.tar.gz directly
[root@mha-slave1 ~]# tar xf zabbix-3.2.6.tar.gz -C /usr/local/src/
[root@mha-slave1 ~]# cd /usr/local/src/zabbix-3.2.6/
[root@mha-slave1 zabbix-3.2.6]# groupadd zabbix-agent
[root@mha-slave1 zabbix-3.2.6]# useradd -g zabbix-agent -s /sbin/nologin zabbix-agent
[root@mha-slave1 zabbix-3.2.6]# ./configure --prefix=/usr/local/zabbix-agent --enable-agent
[root@mha-slave1 zabbix-3.2.6]# make install
Edit the configuration file:
[root@mha-slave1 zabbix-3.2.6]# mkdir -p /data/zabbix/logs/
[root@mha-slave1 zabbix-3.2.6]# chown -R zabbix-agent:zabbix-agent /data/zabbix/
[root@mha-slave1 zabbix-3.2.6]# vim /usr/local/zabbix-agent/etc/zabbix_agentd.conf
LogFile=/data/zabbix/logs/zabbix_agentd.log
Server=192.168.43.20           # address of zabbix-server; the client passively waits for zabbix-server to collect data
ServerActive=192.168.43.20     # the client actively sends data to zabbix-server
Hostname=192.168.43.12         # own IP
User=zabbix-agent              # the user the program runs as
UnsafeUserParameters=1         # allow custom keys
Or use the sed command to modify
[root@mha-slave1 ~]# sed -i 's#LogFile=/tmp/zabbix_agentd.log#LogFile=/data/zabbix/logs/zabbix_agentd.log#' /usr/local/zabbix-agent/etc/zabbix_agentd.conf
[root@mha-slave1 ~]# sed -i 's/Server=127.0.0.1/Server=192.168.43.20/g' /usr/local/zabbix-agent/etc/zabbix_agentd.conf
[root@mha-slave1 ~]# sed -i 's/ServerActive=127.0.0.1/ServerActive=192.168.43.20/g' /usr/local/zabbix-agent/etc/zabbix_agentd.conf
[root@mha-slave1 ~]# sed -i 's/Hostname=Zabbix server/Hostname=192.168.43.12/' /usr/local/zabbix-agent/etc/zabbix_agentd.conf
[root@mha-slave1 ~]# sed -i 's/# User=zabbix/User=zabbix-agent/g' /usr/local/zabbix-agent/etc/zabbix_agentd.conf
[root@mha-slave1 ~]# sed -i 's/# UnsafeUserParameters=0/UnsafeUserParameters=1/g' /usr/local/zabbix-agent/etc/zabbix_agentd.conf
Set it up as a system service and add soft links
[root@mha-slave1 ~]# cd /usr/local/src/zabbix-3.2.6/
[root@mha-slave1 zabbix-3.2.6]# cp misc/init.d/tru64/zabbix_agentd /etc/init.d/
[root@mha-slave1 zabbix-3.2.6]# chmod +x /etc/init.d/zabbix_agentd
[root@mha-slave1 zabbix-3.2.6]# ln -s /usr/local/zabbix-agent/sbin/ /usr/local/sbin/
[root@mha-slave1 zabbix-3.2.6]# ln -s /usr/local/zabbix-agent/bin/ /usr/local/bin/
Set up self-startup
[root@mha-slave1 zabbix-3.2.6~] # vim / etc/init.d/zabbix_agentd
# add the following to the second line
# chkconfig: 2345 10 90
# description: zabbix agentd
Join the system service and boot up
[root@mha-slave1 zabbix-3.2.6]# chkconfig --add zabbix_agentd
[root@mha-slave1 zabbix-3.2.6]# chkconfig zabbix_agentd on
[root@mha-slave1 zabbix-3.2.6]# service zabbix_agentd restart
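If zabbix_get is available on the zabbix-server host (it is built together with the agent, so its presence here is an assumption), a quick reachability test of the new agent looks like this (10050 is the agent's default port):
[root@zabbix-server ~]# zabbix_get -s 192.168.43.12 -p 10050 -k agent.ping    # expect "1"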
Add groups and hosts
If you have more than one mysql host to monitor, you can create a mysql group, then create the mysql hosts and join them to the group. If there is only one host, there is no need to create a group.
Create a group
Create a mysql group, and you can select other groups to specify a template for the mysql group
Add Host
Add a template
View graphs
Monitor designated port
Zabbix monitors designated ports (such as mysql ports)
Preface
Monitor whether a specified port is listening, to make sure the service is running normally.
Steps
Create a template
Create a template to monitor the mysql port
Specify the name of the template (cannot be in Chinese), and the group to which the template is applied
Configuration -> Templates -> Create monitoring item
Create a monitoring item
Configuration -> Hosts -> (select the host) -> Monitoring items -> Create monitoring item
Enter the name of the monitoring item, select the type (note that the default type is Zabbix agent; choose Zabbix agent (active) for active monitoring), select the key, enter the port to be monitored, create a new application set named Port listen, and finally click Add. An example key is shown below.
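For reference, the built-in agent key for checking whether a port is listening has this form (3306, the MySQL port, is used as the example here):
net.tcp.listen[3306]    # returns 1 if the port is listening, 0 otherwise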
Trigger settings
Add trigger
Add a trigger: create the trigger, fill in the trigger name, set the severity, then click to fill in the expression, select the monitoring item just created and click Insert; finally click Add to finish creating the trigger. An example expression is shown below.
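A possible trigger expression for the item above; the name "Template mysql port" is only illustrative, use the template or host name you actually created:
{Template mysql port:net.tcp.listen[3306].last()}=0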
Apply the template to the host
View the latest data
If you look at the latest data, you can see that the monitoring status is 1, that is, the port is listening.
View the alarm
Browsing the latest data, you can see that the monitoring status is 0, that is, the port is not listening.
Turn on mysql
State recovery:
DNS deployment
Primary DNS deployment
Experimental environment
Install the required software packages
[root@dns1 ~]# yum -y install bind*
Edit the main configuration file for DNS
[root@dns1 ~]# vim /etc/named.conf    # modify
options {
    listen-on port 53 { any; };       // allow all addresses to reach this port
    listen-on-v6 port 53 { ::1; };
    directory "/var/named";           // default location of zone data files
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";    // location of the cache statistics file
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };             // allow queries from all network segments
Edit area configuration
[root@dns1 ~]# vim /etc/named.rfc1912.zones    # add at the bottom
1. Edit the forward lookup zone
zone "qingniao.com" in {                  # web site domain name
    type master;                          # primary DNS
    file "qingniao.com.zone";             # zone file name
    allow-transfer { 192.168.43.51; };    # address of the slave DNS
};
2. Edit the reverse lookup zone
zone "43.168.192.in-addr.arpa" in {       # reversed network address
    type master;                          # primary type
    file "192.168.43.arpa";               # zone file name
};
Configure the main area forward data file
[root@dns1 ~]# vim /var/named/qingniao.com.zone
:r /var/named/named.localhost    # in vim last-line (command) mode, read in the template file
$TTL 1D
@       IN SOA qingniao.com. admin.qingniao.com. (2019071917
        1D
        1H
        1W
        3H)
        IN NS dns1.qingniao.com.
        IN NS dns2.qingniao.com.
dns1    IN A 192.168.43.50
dns2    IN A 192.168.43.51
discuz  IN A 192.168.43.100
zabbix  IN A 192.168.43.20
elk     IN A 192.168.43.80
Edit the reverse region resolution file
[root@dns1 ~]# vim /var/named/192.168.43.arpa
$TTL 1D
@    IN SOA qingniao.com. admin.qingniao.com. (2019071917
     1D
     1H
     1W
     3H)
     IN NS dns1.qingniao.com.
     IN NS dns2.qingniao.com.
50   IN PTR dns1.qingniao.com.
51   IN PTR dns2.qingniao.com.
100  IN PTR discuz.qingniao.com.
20   IN PTR zabbix.qingniao.com.
80   IN PTR elk.qingniao.com.
Check the configuration file
[root@dns1 ~]# named-checkconf -z /etc/named.conf
Start named
[root@dns1 ~] # systemctl start named
Verify positive and negative DNS parsing
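A possible way to verify is to query the new server directly with dig (installed as part of the bind packages above):
[root@dns1 ~]# dig @192.168.43.50 discuz.qingniao.com +short    # expect 192.168.43.100
[root@dns1 ~]# dig @192.168.43.50 -x 192.168.43.100 +short      # expect discuz.qingniao.com.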
Deploy from DNS
Install the required software packages
[root@dns2 ~]# yum -y install bind*
Write the main configuration file
[root@dns2 ~]# vim /etc/named.conf    # modify
options {
    listen-on port 53 { any; };       // allow all addresses to reach this port
    listen-on-v6 port 53 { ::1; };
    directory "/var/named";           // default location of zone data files
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";    // location of the cache statistics file
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };             // allow queries from all network segments
Write a region file
[root@dns2 ~]# vim /etc/named.rfc1912.zones
1. Write the forward zone definition
zone "qingniao.com" in {
    type slave;                             # type must be changed to slave
    masters { 192.168.43.50; };             # IP address of the primary DNS
    file "slaves/qingniao.com.zone";        # the zone file is synchronized automatically into the slaves directory
};
2. Write the reverse zone definition
zone "43.168.192.in-addr.arpa" in {
    type slave;                             # type changed to slave
    masters { 192.168.43.50; };             # IP address of the primary DNS
    file "slaves/192.168.43.arpa";          # the primary DNS reverse zone file is synchronized automatically into the slaves directory
};
Start the service
[root@dns2 ~] # systemctl restart named
Set the DNS server to be used for queries
[root@dns2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
Put the DNS server you want to query first in the DNS1 entry
[root@dns2 ~] # systemctl restart network
Check to see if the slaves folder synchronizes zone files
[root@dns2 ~]# cd /var/named/
[root@dns2 named] # ls
[root@dns2 named] # ls slaves/
Verification
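A quick check against the slave itself (dig comes with the bind packages); the answers should match the master's zone data:
[root@dns2 ~]# dig @192.168.43.51 zabbix.qingniao.com +short    # expect 192.168.43.20
[root@dns2 ~]# dig @192.168.43.51 -x 192.168.43.80 +short       # expect elk.qingniao.com.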