Log analysis is one of the main means by which operations and maintenance staff troubleshoot system failures and locate problems. To manage the log records of multiple servers centrally, the open-source real-time log analysis platform ELK can be used. ELK consists of three open-source tools: Elasticsearch, Logstash and Kibana. The three tools can be deployed on different servers and work together, but Logstash must be deployed on every server whose logs need to be collected. ELK's official website is: https://www.elastic.co/cn/
ELK works as follows (if you understand how it works, you can deploy a highly available ELK platform):
Logstash collects the logs generated by the application servers (AppServer) and stores them in the Elasticsearch cluster, while Kibana queries data from the Elasticsearch cluster, generates charts, and returns them to the browser. Simply put, log processing and analysis generally involves the following steps:
1. Logstash centralizes log management.
2. Logstash formats the logs and outputs them to Elasticsearch.
3. Elasticsearch indexes and stores the formatted data.
4. Kibana presents the data on the front end.
Build an ELK platform:
1. Preparatory work:
1. Allocate at least 4 GB of memory to the Node1 and Node2 nodes; the memory of the Apache node does not matter.
2. Download the required software package: https://pan.baidu.com/s/1aP7GDiRBdXzCZBbgiP1fAw
Extraction code: spuh
All three servers need to mount the .iso image provided above.
3. Modify the hostnames of the node servers and configure name resolution in the local /etc/hosts file. Check the Java environment; it must be Java 1.8 or higher.
4. Open the relevant ports in the firewall; for simplicity, the firewall is turned off directly here.
5. Keep the time on all servers synchronized, either with a time synchronization server or manually (see the short command sketch after this list).
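As a sketch of steps 4 and 5 (not from the original walkthrough), assuming CentOS 7 hosts where firewalld and the ntpdate client are available, and using a public NTP server only as an example:

[root@node1 /]# systemctl stop firewalld          # turn the firewall off (for convenience only)
[root@node1 /]# systemctl disable firewalld       # keep it off after a reboot
[root@node1 /]# yum -y install ntpdate            # install the ntpdate client if it is missing
[root@node1 /]# ntpdate ntp.aliyun.com            # synchronize the clock manually; any reachable NTP server works

Repeat the same commands on node2 and on the Apache server.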
Node1 node configuration:
[root@node1 /]# hostname                          # check the hostname
node1
[root@node1 /]# vim /etc/hosts                    # edit the resolution file and add the following two lines
192.168.1.10 node1
192.168.1.20 node2
[root@node1 /]# java -version                     # check the Java environment
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)
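Note that the hostname command above only prints the current name. On a systemd system such as CentOS 7, the name itself can be set persistently with hostnamectl, for example:

[root@localhost /]# hostnamectl set-hostname node1    # persistently set the hostname to node1
[root@localhost /]# bash                              # open a new shell so the prompt shows the new name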
Node2 node configuration (basically similar to node1):
[root@node2 /]# hostname                          # check the hostname
node2
[root@node2 /]# vim /etc/hosts                    # edit the resolution file
192.168.1.10 node1                                # add
192.168.1.20 node2                                # add
[root@node2 /]# java -version                     # check the Java environment
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)
2. Install Elasticsearch:
1. Node1 configuration:
[root@node1 /]# mount /dev/sr0 /media/                        # mount the ELK package
[root@node1 /]# cd /media/
[root@node1 media]# cp * /usr/src/                            # copy all the files
[root@node1 media]# cd /usr/src/
[root@node1 src]# rpm -ivh elasticsearch-5.5.0.rpm            # install
[root@node1 src]# systemctl daemon-reload
[root@node1 src]# systemctl enable elasticsearch.service
[root@node1 /]# vim /etc/elasticsearch/elasticsearch.yml      # modify the main configuration file as follows; remember to remove the comment symbols
cluster.name: my-elk-cluster                                  # cluster name
node.name: node1                                              # node name
path.data: /data/elk_data                                     # data storage path
path.logs: /var/log/elasticsearch/                            # log storage path
bootstrap.memory_lock: false                                  # do not lock memory at startup
network.host: 0.0.0.0                                         # IP address the service binds to; 0.0.0.0 means all addresses
http.port: 9200                                               # listening port
discovery.zen.ping.unicast.hosts: ["node1", "node2"]          # cluster discovery implemented via unicast
[root@node1 /]# mkdir -p /data/elk_data                       # create the data storage directory
[root@node1 /]# chown elasticsearch:elasticsearch /data/elk_data/    # change owner and group
[root@node1 /]# systemctl start elasticsearch.service         # start the service
[root@node1 /]# netstat -anpt | grep 9200                     # the port is not visible immediately; wait about ten seconds after starting the service
tcp6       0      0 :::9200          :::*          LISTEN      3992/java
2. Node2 configuration:
[root@node2 /]# mount /dev/sr0 /media/                        # mount the ELK package
[root@node2 /]# cd /media/
[root@node2 media]# cp * /usr/src/
[root@node2 media]# cd /usr/src/
[root@node2 src]# rpm -ivh elasticsearch-5.5.0.rpm            # install
[root@node2 src]# systemctl daemon-reload
[root@node2 src]# systemctl enable elasticsearch.service
[root@node2 /]# scp root@192.168.1.10:/etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/    # copy node1's configuration file over
The authenticity of host '192.168.1.10 (192.168.1.10)' can't be established.
ECDSA key fingerprint is 68:df:0f:ac:c7:75:df:02:88:7d:36:6a:1a:ae:27:23.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.10' (ECDSA) to the list of known hosts.
root@192.168.1.10's password:                                 # enter the password
elasticsearch.yml                              2853   2.8KB/s   00:00
[root@node2 /]# vim /etc/elasticsearch/elasticsearch.yml      # only a small change is needed
node.name: node2                                              # just change the node name; everything else stays the same
[root@node2 /]# mkdir -p /data/elk_data                       # create the data storage directory
[root@node2 /]# chown elasticsearch:elasticsearch /data/elk_data/    # set owner and group
[root@node2 /]# systemctl start elasticsearch.service
[root@node2 /]# netstat -anpt | grep 9200
tcp6       0      0 :::9200          :::*          LISTEN      4074/java
3. View node information:
Node1: visit http://192.168.1.10:9200 in a browser; the returned JSON shows the node name, cluster name and version.
Node2: visit http://192.168.1.20:9200 in the same way.
Visit http://192.168.1.10:9200/_cluster/health?pretty to view the health status of the cluster:
Visit http://192.168.1.10:9200/_cluster/state?pretty to view the status information of the cluster:
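The same information can also be pulled from the command line with curl, which is handy when no browser is at hand (a quick check, assuming curl is installed):

[root@node1 /]# curl 'http://192.168.1.10:9200/_cluster/health?pretty'    # "status" should be "green" and "number_of_nodes" should be 2
[root@node1 /]# curl 'http://192.168.1.10:9200/_cluster/state?pretty'     # full cluster state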
Viewing the cluster this way is not very user-friendly; installing the Elasticsearch-head plug-in makes it much easier to manage the cluster:
Install the Elasticsearch-head plug-in on node1 (node and phantomjs need to be installed first):
[root@node1 /]# cd /usr/src/
[root@node1 src]# tar zxf node-v8.2.1.tar.gz                  # unpack the node source package
[root@node1 src]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure && make && make install # compilation takes a long time, about 40 minutes
[root@node1 src]# tar jxf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 src]# cd phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# cp phantomjs /usr/local/bin/                # copy the binary to the specified directory
[root@node1 src]# tar zxf elasticsearch-head.tar.gz
[root@node1 src]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install                  # install the dependency packages
[root@node1 elasticsearch-head]# vim /etc/elasticsearch/elasticsearch.yml    # edit the main configuration file and add the following two lines anywhere
http.cors.enabled: true                                       # add this line to enable cross-domain access support
http.cors.allow-origin: "*"                                   # add this line: domain names allowed for cross-domain access
[root@node1 elasticsearch-head]# systemctl restart elasticsearch.service     # restart the service so the configuration takes effect
[root@node1 elasticsearch-head]# npm run start &              # start the service in the background
# If it is started in the foreground, the service stops as soon as the session is interrupted.
# The service must be started from inside the unpacked elasticsearch-head directory, because the
# process reads a file in that directory; otherwise startup may fail.
[root@node1 /]# netstat -anpt | grep 9100
tcp        0      0 0.0.0.0:9100     0.0.0.0:*     LISTEN      49967/grunt
[root@node1 /]# netstat -anpt | grep 9200
tcp6       0      0 :::9200          :::*          LISTEN      49874/java
Visit http://192.168.1.10:9100 through a browser to view cluster information:
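To have something to look at in elasticsearch-head before Logstash is in place, a test document can be indexed with curl. This is only an illustration; the index name index-demo and the document content are made up here:

[root@node1 /]# curl -XPUT 'http://192.168.1.10:9200/index-demo/test/1?pretty' -H 'Content-Type: application/json' -d '{"user": "test", "message": "hello elk"}'    # creates the index-demo index and inserts document 1, which should then show up in the head page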
4. Install Kibana (it can be installed on a separate server; with limited resources here, it is installed on node1):
[root@node1 /]# cd /usr/src/
[root@node1 src]# rpm -ivh kibana-5.5.1-x86_64.rpm
[root@node1 src]# systemctl enable kibana.service
[root@node1 src]# vim /etc/kibana/kibana.yml
....                                                          # part of the content omitted
server.port: 5601                                             # port Kibana listens on
server.host: "0.0.0.0"                                        # Kibana listening address; 0.0.0.0 means all addresses on the host
elasticsearch.url: "http://192.168.1.10:9200"                 # establish the connection to Elasticsearch
kibana.index: ".kibana"                                       # add the .kibana index to Elasticsearch
[root@node1 src]# systemctl start kibana.service
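A quick way to confirm that Kibana came up, using the port set in the configuration above:

[root@node1 src]# netstat -anpt | grep 5601       # Kibana should be listening on port 5601 shortly after the service starts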
5. Configure the Apache server (set up your own website on it):
[root@Web ~]# systemctl start httpd                           # start the httpd service
[root@Web ~]# java -version                                   # check that the Java environment is 1.8
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)
[root@Web ~]# mount /dev/cdrom /media                         # mount the .iso file provided above
mount: /dev/sr0 is write-protected, mounting read-only
[root@Web ~]# cd /media
[root@Web media]# rpm -ivh logstash-5.5.1.rpm                 # install logstash
[root@Web media]# systemctl daemon-reload
[root@Web media]# systemctl enable logstash.service           # enable at boot
[root@Web media]# cd /etc/logstash/conf.d/                    # go to the configuration directory
[root@Web conf.d]# vim apache_log.conf                        # write the configuration that collects the Apache logs and the system log
# The system log below is explained as an example; the rest follow the same format.
input {
    file {
        path => "/var/log/messages"                           # the log file to collect
        type => "system"                                      # the type is "system"; it can be customized and corresponds to the type used in output {}
        start_position => "beginning"                         # collect from the beginning of the file
    }
    file {
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {                                   # if the type is "system"
        elasticsearch {                                       # output to the Elasticsearch server
            hosts => ["192.168.1.10:9200"]                    # Elasticsearch listening address and port
            index => "system-%{+YYYY.MM.dd}"                  # specify the index format
        }
    }
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.1.10:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.1.10:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
# After writing, save and exit.
[root@Web conf.d]# chmod o+r /var/log/messages                # give others read permission on this file
[root@Web conf.d]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/    # create a symbolic link for the command
[root@Web conf.d]# systemctl start logstash                   # start the service
[root@Web conf.d]# logstash -f apache_log.conf &              # point Logstash at the configuration file just written and run it in the background
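If the pipeline does not come up, the configuration file can be syntax-checked first. A sketch using the test flag provided by the Logstash 5.x series:

[root@Web conf.d]# logstash -f apache_log.conf --config.test_and_exit    # only parses the configuration and exits; fix any reported errors before running it for real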
6. Create an index:
1. Visit http://192.168.1.10:9100 through the browser to check whether the index is created:
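Besides the browser, the indices can be listed from the command line with the _cat API (a quick check):

[root@node1 /]# curl 'http://192.168.1.10:9200/_cat/indices?v'    # the system-*, apache_access-* and apache_error-* indices should appear here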
There are several important concepts about indexing:
Index: similar to a "database" in a relational database. Type (the type specified in the Logstash configuration): similar to a "table" in a relational database.
In the index just created, marked by the red box, you can see the positions corresponding to node1 and node2. The numbers 0 to 4 with a green background indicate that the index is divided into five shards. Because the Elasticsearch server specified in the Logstash configuration is node1, the shards on node1 are the primary shards; node2 automatically synchronizes node1's shards, and the shards on node2 are backup shards, called replicas, which provide data redundancy and share the query load. By default, Elasticsearch automatically balances the load of index requests.
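The primary/replica layout described above can also be inspected from the command line with the _cat/shards API (a quick check):

[root@node1 /]# curl 'http://192.168.1.10:9200/_cat/shards?v'     # in the "prirep" column, p marks a primary shard and r marks a replica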
Now log in to Kibana by visiting http://192.168.1.10:5601 and add the index:
You can add the Apache error log index yourself. After adding, click "Discover" and select an index in the drop-down list, for example the apache_access-* pattern, to view the corresponding charts and log information:
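If the Apache indices stay empty, generating a few requests against the web server makes fresh access-log entries flow through Logstash into Elasticsearch. A sketch, assuming the Apache server's address is 192.168.1.30 (the article does not state its IP):

[root@node1 /]# for i in $(seq 1 10); do curl -s http://192.168.1.30/ > /dev/null; done    # hit the site a few times to produce access-log entries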