First, an introduction to the ELK log analysis system:
Log server:
1. Improves security
2. Stores logs centrally
3. Drawback: the logs are difficult to analyze
ELK log processing steps:
1. Collect and manage the logs centrally
2. Format the logs (Logstash) and output them to Elasticsearch
3. Index and store the formatted data (Elasticsearch)
4. Present the data on the front end (Kibana)
ELK: Elasticsearch + Logstash + Kibana
ELK is short for Elasticsearch, Logstash, and Kibana; these are the core components of the suite, but not the only ones.
(1) Elasticsearch is a real-time full-text search and analysis engine that provides three major functions: collecting, analyzing, and storing data. It offers an open REST and Java API architecture, efficient search, and a scalable distributed system, and it is built on top of the Apache Lucene search engine library.
(2) Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including syslog, error logs, and custom application logs. It can receive logs from many sources, including syslog, message queues (such as RabbitMQ), and JMX, and it can output data in a variety of ways, including e-mail, WebSockets, and Elasticsearch.
(3) Kibana is a Web-based graphical interface for searching, analyzing, and visualizing the log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve the data, and it not only lets users create custom dashboard views of their data but also lets them query and filter the data in ad hoc ways.
Features:
1. Seamless integration with Elasticsearch
2. Integrates and analyzes complex data
3. Flexible interface that makes sharing easier
4. Simple configuration, with visualization of multiple data sources
5. Simple data export
6. Brings value to more team members
Second, build an ELK log analysis system:
Step 1: prepare the elasticsearch environment
(1) set the two hostnames: node1 and node2
(2) modify the hosts file:
(3) turn off the firewalls on all nodes (a command sketch for these three sub-steps follows):
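The exact commands are not preserved in this copy of the article; on a CentOS-style host they would look roughly like this (node1's address 192.168.220.136 appears later in the article; the second node's address does not, so it is left as a placeholder):
hostnamectl set-hostname node1              # on the first node (use node2 on the second)
vi /etc/hosts                               # add an entry for each node, for example:
#   192.168.220.136   node1
#   <node2-IP>        node2                 # the second node's address is not given in the article
systemctl stop firewalld
systemctl disable firewalld
setenforce 0                                # SELinux is usually relaxed too in this kind of lab setup (assumption)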
Step 2: deploy and install the elasticsearch software (required on both nodes; a consolidated command sketch follows this list)
(1) install:
(2) modify the configuration file:
Note: the configuration on the second node is the same as on the first; only the node name and IP address need to change.
(3) create the data storage path and grant permissions on it:
(4) start the service:
1. Enter the following URL in the browser to check the health status of the cluster:
2. Check the cluster status information:
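The original installation and configuration commands are not reproduced in this copy. Assuming elasticsearch was delivered as an RPM (the package file name and version below are placeholders), the sequence on one node would look roughly like this:
rpm -ivh elasticsearch-5.5.0.rpm                       # hypothetical package file name
systemctl daemon-reload
systemctl enable elasticsearch.service

vi /etc/elasticsearch/elasticsearch.yml                # the settings usually adjusted are:
#   cluster.name: my-elk-cluster                       # same value on both nodes (name is illustrative)
#   node.name: node1                                   # node2 on the second server
#   path.data: /data/elk_data
#   path.logs: /var/log/elasticsearch/
#   network.host: 0.0.0.0
#   http.port: 9200

mkdir -p /data/elk_data                                # data storage path
chown elasticsearch:elasticsearch /data/elk_data       # grant the service account access

systemctl start elasticsearch.service
netstat -antp | grep 9200                              # confirm the node is listening

# cluster health and state, as described in points 1 and 2 above:
curl http://192.168.220.136:9200/_cluster/health?pretty
curl http://192.168.220.136:9200/_cluster/state?pretty
A healthy two-node cluster should report "status": "green" in the health output.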
Step 3: install the elasticsearch-head plug-in
(1) install the dependency packages:
(2) compile and install the node component:
(3) install the phantomjs front-end framework:
(4) install the elasticsearch-head data visualization tool:
(5) modify the main configuration file:
(6) start elasticsearch-head
At this point, you can check that ports 9100 and 9200 are both listening (a command sketch covering this whole step follows):
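The concrete commands for sub-steps (1) through (6) are missing from this copy; a sketch under common assumptions for this kind of tutorial (the archive names are placeholders) is:
yum install -y gcc gcc-c++ make                        # (1) dependency packages

tar xzvf node-v8.2.1.tar.gz                            # (2) hypothetical archive name
cd node-v8.2.1
./configure && make && make install                    # compile and install the node component
cd ..

tar xjvf phantomjs-2.1.1-linux-x86_64.tar.bz2          # (3) hypothetical archive name
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/

tar xzvf elasticsearch-head.tar.gz                     # (4) hypothetical archive name
cd elasticsearch-head
npm install                                            # install the data visualization tool
cd ..

vi /etc/elasticsearch/elasticsearch.yml                # (5) append the two lines below, then restart:
#   http.cors.enabled: true
#   http.cors.allow-origin: "*"
systemctl restart elasticsearch.service

cd elasticsearch-head && npm run start &               # (6) elasticsearch-head listens on port 9100
netstat -lntp | grep 9100                              # check both ports
netstat -lntp | grep 9200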
Step 4: create an index
You can create a new index directly:
You can also enter the following command to create an index:
curl -XPUT '192.168.220.136:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user": "zhangsan", "mesg": "hello world"}'   // the index name is index-demo and the type is test
After refreshing the browser, you will see the index you just created. You can see that, by default, the index is divided into shards and each shard has a replica.
Step 5: install logstash, collect some logs, and output them to elasticsearch
(1) modify the host name
(2) install the Apache service:
(3) install logstash (a command sketch for these three sub-steps follows):
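The commands for these sub-steps are not included in this copy; assuming a yum-based host and an RPM logstash package (the package file name is a placeholder), they would look roughly like this:
hostnamectl set-hostname apache                        # the host name "apache" is an assumption

yum install -y httpd                                   # install the Apache service
systemctl start httpd.service

rpm -ivh logstash-5.5.1.rpm                            # hypothetical package file name
systemctl start logstash.service
systemctl enable logstash.service
ln -s /usr/share/logstash/bin/logstash /usr/local/bin/ # make the logstash command available on the PATH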
(4) check whether logstash (on the Apache host) and elasticsearch (on the node host) work properly together by doing a docking test:
You can test directly with the logstash command, for example:
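A minimal check, reading from standard input and printing straight back to standard output (a sketch, not necessarily the article's exact command):
logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug } }'
# type any line at the prompt; logstash should echo it back as a structured event with a timestamp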
(5) use standard input for the input and write the output to elasticsearch:
logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.220.136:9200"] } }'
At this point, when you view the index information in the browser at http://192.168.220.136:9200/, a new index named logstash-2019.12.17 appears.
(6) Log in to the Apache server and configure the docking there:
The logstash configuration file mainly consists of three parts: input, output, and filter (whether a filter is needed depends on the situation).
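The configuration file itself is not reproduced in the article. Given that the resulting index is named system-2019.12.17 (see below), it probably resembled the following sketch; the file name and the choice of /var/log/messages as the input are assumptions:
cat > /etc/logstash/conf.d/system.conf <<'EOF'
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.220.136:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
EOF
chmod o+r /var/log/messages                            # let the logstash user read the system log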
Restart the service:
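Assuming the RPM-based install sketched earlier, restarting logstash so that it loads the new pipeline is simply:
systemctl restart logstash.service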
(7) view the index information in the browser: a new index named system-2019.12.17 appears
Step 6: install kibana on the node1 host
(1) install kibana and modify its configuration file:
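The installation commands are missing from this copy; for a 5.x-era kibana RPM (the package file name and version are assumptions) the step would look roughly like this:
rpm -ivh kibana-5.5.1-x86_64.rpm                       # hypothetical package file name
systemctl enable kibana.service

vi /etc/kibana/kibana.yml                              # the settings usually adjusted are:
#   server.port: 5601
#   server.host: "0.0.0.0"
#   elasticsearch.url: "http://192.168.220.136:9200"
#   kibana.index: ".kibana"

systemctl start kibana.service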
Browser visit: 192.168.220.136:5601
Next, create an index pattern in the visual interface: system-* (docking with the syslog files)
(2) dock the Apache host's Apache log files (both normal access logs and error logs); a sample configuration follows:
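The article does not include this configuration file either; based on the two index names created below (apache_access- and apache_error-), it probably looked roughly like this sketch (the file name and log paths are assumptions):
cat > /etc/logstash/conf.d/apache_log.conf <<'EOF'
input {
    file {
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.220.136:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.220.136:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
EOF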
Restart the service:
Then, in the visual interface, create two indexes:
1. apache_access-
2. apache_error-
Wait a moment and you can see these two log files in Discover:
Because all of the nodes were backed up synchronously earlier, which also improves disaster recovery, downtime on one node will not cause data loss.