ELK log analysis system

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

ELK Log Analysis System: a brief introduction. A plain log server improves security and centralizes log storage, but its defect is that the logs are difficult to analyze. ELK divides the work as follows:

Collect data: Logstash Agent

Index data: Elasticsearch Cluster

View data: Kibana Server

Simple result topology

ELK Log Analysis System components

Elasticsearch: a real-time full-text search and analysis engine.

Logstash: a tool for collecting, analyzing, and filtering logs.

Kibana: a Web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch.

Log processing steps:
1. Centralize log management.
2. Format the logs (Logstash) and output them to Elasticsearch.
3. Index and store the formatted data (Elasticsearch).
4. Display the data on the front end (Kibana).

2. Elasticsearch introduction
1. Overview: provides a distributed, multi-user full-text search engine.
2. Core concepts: near real-time; cluster; node; index: index (database) > type (table) > document (record); shards and replicas.

3. Logstash introduction
1. Logstash is a powerful data processing tool that can achieve data transmission, format processing, and format output. Its workflow is data input, data processing (such as filtering and rewriting), and data output.
2. Main components: Shipper, Indexer, Broker, Search and Storage, Web Interface.

4. Kibana introduction
1. Kibana is an open source analysis and visualization platform for Elasticsearch, used to search and view data stored in Elasticsearch indexes and to perform advanced data analysis and presentation.
2. Main functions: seamless integration with Elasticsearch data; complex data analysis; benefits more team members; flexible interface, easier to share and configure; visualization of multiple data sources; simple data export.

5. Deployment of the ELK log analysis system
1. Requirements: configure an ELK log analysis cluster, use Logstash to collect logs, and use Kibana to view and analyze the logs.
2. Device list:

Hostname/IP address | Operating system | Main software
node1/192.168.45.128 | CentOS7-x86 | Elasticsearch, Kibana
node2/192.168.45.129 | CentOS7-x86 | Elasticsearch
apache/192.168.45.133 | CentOS7-x86 | Logstash

3. Experimental topology
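The index > type > document hierarchy described above maps onto relational-database terms. A minimal sketch in Python (the index name and sample documents are illustrative, not from this guide's cluster):

```python
# Toy model of the Elasticsearch hierarchy: an index is like a database,
# a type is like a table, and a document is like a record (row).
index = {
    "name": "logs-2020.01.17",  # hypothetical index name
    "types": {
        "system": [  # a "type" plays the role of a table
            {"host": "node1", "message": "service started"},  # document = record
            {"host": "node2", "message": "service started"},
        ],
    },
}

# Fetching one "record" walks index -> type -> document:
doc = index["types"]["system"][0]
print(doc["host"])  # → node1
```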

4. Prepare the installation environment: shut down the firewall and SELinux; allocate 4 GB of memory to the Node1 and Node2 nodes and 1 GB to the Apache node; connect the VMs through the VMware simulated network VMnet8.

Step 1: configure the ES node 1 server

1. Turn off firewall and security features

systemctl stop firewalld.service
setenforce 0

2. Modify the hosts file

vim /etc/hosts
192.168.142.152 node1
192.168.142.153 node2

3. Remotely mount the resource pack

mount.cifs //192.168.142.1/elk /mnt

4. Install the package

cd /mnt
rpm -ivh elasticsearch-5.5.0.rpm

5. Load system services

systemctl daemon-reload

6. Enable the service at boot

systemctl enable elasticsearch.service

7. Back up the configuration file

cd /etc/elasticsearch/
cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak

8. Modify the elasticsearch main configuration file

vim /etc/elasticsearch/elasticsearch.yml
# Line 17, uncomment and modify the cluster name
cluster.name: my-elk-cluster
# Line 23, uncomment and modify the node name
node.name: node1
# Line 33, uncomment and modify the data storage path
path.data: /data/elk_data
# Line 37, uncomment and modify the log storage path
path.logs: /var/log/elasticsearch/
# Line 43, uncomment and modify: do not lock memory at startup
bootstrap.memory_lock: false
# Line 55, uncomment and modify the address (0.0.0.0 for all addresses)
network.host: 0.0.0.0
# Line 59, uncomment to open the service port
http.port: 9200
# Line 68, uncomment and modify the node names
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
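The settings uncommented in this step are flat `key: value` lines, so they can be sanity-checked without a YAML library. A hedged sketch (`parse_settings` is a naive helper invented here, not part of Elasticsearch):

```python
def parse_settings(text: str) -> dict:
    """Naively read 'key: value' lines, skipping blanks and comments.
    Good enough for flat settings like the ones above; not a YAML parser."""
    settings = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        settings[key.strip()] = value.strip()
    return settings

# The flat settings uncommented in this step:
conf = """\
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
network.host: 0.0.0.0
http.port: 9200
"""
settings = parse_settings(conf)
print(settings["http.port"])  # → 9200
```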

9. Create the data storage path

mkdir -p /data/elk_data

10. Grant ownership of the data storage path

chown elasticsearch:elasticsearch /data/elk_data/

11. Start the elasticsearch service

systemctl start elasticsearch.service

12. View the port service status

netstat -ntap | grep 9200
tcp6       0      0 :::9200       :::*       LISTEN       96970/java
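Beyond running netstat on the node itself, reachability of port 9200 can be probed from any machine with a short TCP connect test. A minimal sketch (the helper name and timeout are choices made here, not part of the guide):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. is_port_open("192.168.142.152", 9200) once elasticsearch is up
```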

13. Install the compilation environment

yum install -y gcc gcc-c++ make

14. Extract the Node package

cd /mnt
tar zxvf node-v8.2.1.tar.gz -C /opt

15. Configure node

cd /opt/node-v8.2.1/
./configure

16. Compile and install

make && make install

Step 2: install the phantomjs front-end framework

1. Extract the phantomjs software package

cd /mnt
tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src

2. Switch directory and view the phantomjs command

cd /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin
ls
phantomjs

3. Copy the phantomjs command to the system directory

cp phantomjs /usr/local/bin/

Step 3: install the elasticsearch-head data visualization tool

1. Extract the elasticsearch-head software package

cd /mnt
tar zxvf elasticsearch-head.tar.gz -C /usr/local/src

2. Install the elasticsearch-head data visualization tool

cd /usr/local/src/elasticsearch-head/

npm install

3. Modify the elasticsearch main configuration file

vim /etc/elasticsearch/elasticsearch.yml
# Add the following at the end of the file
http.cors.enabled: true
http.cors.allow-origin: "*"

4. Restart the elasticsearch service

systemctl restart elasticsearch.service

5. Start running in the background

npm run start &

6. View service port status

netstat -ntap | grep 9100
tcp        0      0 0.0.0.0:9100       0.0.0.0:*       LISTEN       50105/grunt

[root@node1 elasticsearch-head]# netstat -ntap | grep 9200
tcp6       0      0 :::9200       :::*       LISTEN       96970/java

Step 4: the configuration of the ES node 2 server is the same as node 1; just repeat the above operations!

Step 5: use a browser to visit 192.168.142.152:9100, connect to the address of the other node, and check the health status of the cluster
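The health check in the browser reflects Elasticsearch's `_cluster/health` API, which returns JSON with a `status` field (green/yellow/red). A hedged helper for reading that field, exercised here on an abridged, illustrative response body rather than a live cluster:

```python
import json

def cluster_status(body: str) -> str:
    """Pull the 'status' field out of a _cluster/health JSON response body."""
    return json.loads(body)["status"]

# Abridged, illustrative response of the kind GET /_cluster/health returns:
sample = '{"cluster_name": "my-elk-cluster", "status": "green", "number_of_nodes": 2}'
print(cluster_status(sample))  # → green
```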

Step 6: create an index

Go back to the overview and you can see the index created!

Step 7: configure the Apache server and install Logstash to collect logs

# Install the Apache service
yum install -y httpd
# Remotely mount the resource pack
mount.cifs //192.168.142.1/elk /mnt
# Switch to the mount point
cd /mnt
# Install logstash
rpm -ivh logstash-5.5.1.rpm
# Enable the logstash service at boot
systemctl enable logstash.service
# Start the logstash service
systemctl start logstash.service
# Create a command soft link into the system path
ln -s /usr/share/logstash/bin/logstash /usr/local/bin
# Switch to the log directory
cd /var/log
# Grant others read permission
chmod o+r messages
# View permissions
ll
# Switch to the logstash configuration directory
cd /etc/logstash/conf.d/
# Edit the file
vim system.conf

Write the following, used to collect system logs:

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        # The address points to the node1 node
        hosts => ["192.168.142.152:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

# Restart the service
systemctl restart logstash.service

Step 8: view the collected log information
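The `index => "system-%{+YYYY.MM.dd}"` setting makes Logstash write to a new index each day. What that sprintf-style date pattern expands to can be mimicked in Python (`daily_index` is a name invented here for illustration):

```python
from datetime import date

def daily_index(prefix: str, day: date) -> str:
    """Mimic Logstash's index => "prefix-%{+YYYY.MM.dd}" naming."""
    return f"{prefix}-{day:%Y.%m.%d}"

print(daily_index("system", date(2020, 1, 17)))  # → system-2020.01.17
```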

Step 9: go back to the node1 node and install kibana

# Switch to the mount point
cd /mnt
# Install kibana
rpm -ivh kibana-5.5.1-x86_64.rpm
# Switch to the kibana directory
cd /etc/kibana/
# Back up the kibana.yml file
cp kibana.yml kibana.yml.bak
# Modify the kibana.yml file
vim kibana.yml
# Line 2, uncomment to open port 5601
server.port: 5601
# Line 7, uncomment and change the address (0.0.0.0 for all addresses)
server.host: "0.0.0.0"
# Line 21, uncomment and point to the url of the node1 node
elasticsearch.url: "http://192.168.142.152:9200"
# Line 30, uncomment to open the kibana homepage
kibana.index: ".kibana"
# Start the kibana service
systemctl start kibana.service

Step 10: test kibana showing the log data. Use a browser to access 192.168.142.152:5601

Step 11: dock all Apache log files of the Apache host (operate on the Apache server)

# Edit the Apache log configuration file
vim apache_log.conf

input {
    file {
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.142.152:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.142.152:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}

# Run logstash with the new configuration (wait a minute for it to start)
logstash -f apache_log.conf

Step 12: test the Apache log message display
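The conditional output block above routes each event to a different index family by its type tag. The routing logic, sketched in Python (the event shape and function name are assumptions for illustration, not Logstash internals):

```python
from datetime import date

def route_event(event: dict, day: date) -> str:
    """Pick the daily index for an event, mirroring the
    if [type] == "access" / "error" conditionals above."""
    prefixes = {"access": "apache_access", "error": "apache_error"}
    return f"{prefixes[event['type']]}-{day:%Y.%m.%d}"

print(route_event({"type": "error"}, date(2020, 1, 17)))  # → apache_error-2020.01.17
```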

The above is the whole content of the ELK log analysis system. Thank you for reading!
