A centralized log server improves security and consolidates log storage, but it has a drawback: the logs are difficult to analyze.
The ELK log analysis system
Elasticsearch: storage and index pool
Logstash: log collector
Kibana: data visualization

Log processing steps:
1. Centralize the management of logs
2. Logstash formats the logs and outputs them to Elasticsearch
3. Elasticsearch indexes and stores the formatted data
4. Kibana presents the data at the front end

Elasticsearch overview
Elasticsearch provides a distributed, multi-user full-text search engine. Core concepts: near real time, cluster, node, index (an index is like a library, a type like a table, a document like a record), and shards and replicas.

Logstash introduction
Logstash is a powerful data processing tool that handles data transport, format processing, and formatted output. A pipeline has three stages: data input, data processing (filtering, rewriting, etc.), and data output.
Main LogStash components: Shipper, Indexer, Broker, Search and Storage, Web Interface.

Kibana introduction
Kibana is an open source analysis and visualization platform for Elasticsearch. It searches and views the data stored in Elasticsearch indexes and performs advanced data analysis through a variety of charts.
Main functions of Kibana: seamless integration with Elasticsearch, complex data analysis, benefits for more team members, a flexible interface that is easy to share, simple configuration, visualization of multiple data sources, and simple data export.
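To make the library-table-record analogy concrete, here is a minimal sketch, assuming an Elasticsearch 5.x node listening on localhost:9200; the demo-library index, demo-table type, and document body are made-up names for illustration:

[root@node1 ~]# curl -XPUT 'localhost:9200/demo-library/demo-table/1?pretty' -H 'Content-Type: application/json' -d '{"title": "record one"}'    ## index = library, type = table, document = record
[root@node1 ~]# curl -XGET 'localhost:9200/demo-library/demo-table/1?pretty'    ## read the record back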
Experimental topology

Lab environment:
Apache server  192.168.13.128 (Logstash)
Node1 server   192.168.13.129 (Elasticsearch, Kibana)
Node2 server   192.168.13.130 (Elasticsearch)

1. Install Elasticsearch on node1:
[root@node1 ~]# vim /etc/hosts    ## configure name resolution
192.168.13.129 node1
192.168.13.130 node2
[root@node1 ~]# java -version    ## check that Java is available
[root@node1 ~]# mount.cifs //192.168.100.3/LNMP-C7 /mnt/
Password for root@//192.168.100.3/LNMP-C7:
[root@node1 ~]# cd /mnt/elk/
[root@node1 elk]# rpm -ivh elasticsearch-5.5.0.rpm    ## install
[root@node1 elk]# systemctl daemon-reload    ## reload the daemon
[root@node1 elk]# systemctl enable elasticsearch.service    ## start at boot
[root@node1 elk]# cd /etc/elasticsearch/
[root@node1 elasticsearch]# cp elasticsearch.yml elasticsearch.yml.bak    ## back up
[root@node1 elasticsearch]# vim elasticsearch.yml    ## modify the configuration file
cluster.name: my-elk-cluster    ## cluster name
node.name: node1    ## node name (node2 on the second node)
path.data: /data/elk_data    ## data location
path.logs: /var/log/elasticsearch/    ## log location
bootstrap.memory_lock: false    ## do not lock memory at startup
network.host: 0.0.0.0    ## bind the service to all addresses
http.port: 9200    ## port number
discovery.zen.ping.unicast.hosts: ["node1", "node2"]    ## cluster discovery via unicast
[root@node1 elasticsearch]# mkdir -p /data/elk_data    ## create the data directory
[root@node1 elasticsearch]# chown elasticsearch.elasticsearch /data/elk_data/    ## grant ownership
[root@node1 elasticsearch]# systemctl start elasticsearch.service    ## start the service
[root@node1 elasticsearch]# netstat -ntap | grep 9200    ## check that the port is listening
tcp6       0      0 :::9200                 :::*                    LISTEN      2166/java
2. Check the cluster health and status in the browser.
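If no graphical browser is at hand, the same health information can be pulled from the command line. A minimal sketch, assuming the node is reachable at 192.168.13.129:9200:

[root@node1 ~]# curl 'http://192.168.13.129:9200/_cluster/health?pretty'    ## "status" should be green or yellow
[root@node1 ~]# curl 'http://192.168.13.129:9200/_cat/nodes?v'    ## node1 and node2 should both be listed once the cluster forms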
3. Install the node component dependency package on node1 and node2:
[root@node1 elasticsearch]# yum install gcc gcc-c++ make -y    ## install the build tools
[root@node1 elasticsearch]# cd /mnt/elk/
[root@node1 elk]# tar zxvf node-v8.2.1.tar.gz -C /opt/    ## extract the plug-in
[root@node1 elk]# cd /opt/node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure    ## configure
[root@node1 node-v8.2.1]# make && make install    ## compile and install

4. Install the phantomjs front-end framework on node1 and node2:
[root@node1 elk]# tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src/    ## extract to /usr/local/src
[root@node1 elk]# cd /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# cp phantomjs /usr/local/bin/    ## make the binary visible to the system

5. Install elasticsearch-head (data visualization) on node1 and node2:
[root@node1 bin]# cd /mnt/elk/
[root@node1 elk]# tar zxvf elasticsearch-head.tar.gz -C /usr/local/src/    ## decompress
[root@node1 elk]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm install    ## install

6. Modify the configuration file:
[root@node1 elasticsearch-head]# vim /etc/elasticsearch/elasticsearch.yml    ## append the following
http.cors.enabled: true    ## enable cross-domain access support, default is false
http.cors.allow-origin: "*"    ## domains allowed for cross-domain access
[root@node1 elasticsearch-head]# systemctl restart elasticsearch.service    ## restart
[root@node1 elasticsearch-head]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm run start &    ## run the data visualization service
[1] 82515
[root@node1 elasticsearch-head]# netstat -ntap | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      82525/grunt
[root@node1 elasticsearch-head]# netstat -ntap | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      82981/java

7. View the health status in the browser.
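To confirm that the cross-domain settings from step 6 took effect without opening elasticsearch-head, send a request that carries an Origin header and look for the Access-Control-Allow-Origin header in the reply. A small sketch; the origin value itself is arbitrary:

[root@node1 ~]# curl -i -H 'Origin: http://example.com' 'http://192.168.13.129:9200/'    ## the response headers should include Access-Control-Allow-Origin: *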
8. Create an index on node1:
[root@node2 ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user": "zhangsan", "mesg": "hello world"}'    ## create the index information
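The document just written can be read back and the index listed from the same shell. A minimal sketch:

[root@node2 ~]# curl -XGET 'localhost:9200/index-demo/test/1?pretty'    ## the stored document comes back under _source
[root@node2 ~]# curl 'localhost:9200/_cat/indices?v'    ## index-demo should appear in the list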
9. Install Logstash on the Apache server and dock it with Elasticsearch:
[root@apache ~]# yum install httpd -y    ## install the service
[root@apache ~]# systemctl start httpd.service    ## start the service
[root@apache ~]# java -version
[root@apache ~]# mount.cifs //192.168.100.3/LNMP-C7 /mnt/    ## mount
Password for root@//192.168.100.3/LNMP-C7:
[root@apache ~]# cd /mnt/elk/
[root@apache elk]# rpm -ivh logstash-5.5.1.rpm    ## install logstash
[root@apache elk]# systemctl start logstash.service
[root@apache elk]# systemctl enable logstash.service    ## start at boot
[root@apache elk]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/    ## make the command easy for the system to find
[root@apache elk]# logstash -e 'input { stdin{} } output { stdout{} }'    ## standard input and output
The stdin plugin is now waiting for input:
16:58:11.145 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com    ## input
2019-12-19T08:58:35.707Z apache www.baidu.com
www.sina.com.cn    ## input
2019-12-19T08:58:42.092Z apache www.sina.com.cn
[root@apache elk]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug } }'    ## detailed output with the rubydebug codec (a codec is a coder-decoder)
The stdin plugin is now waiting for input:
17:03:08.226 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com    ## formatted output
{
    "@timestamp" => 2019-12-19T09:03:13.267Z,
      "@version" => "1",
          "host" => "apache",
       "message" => "www.baidu.com"
}
[root@apache elk]# logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.13.129:9200"] } }'    ## use logstash to write information into elasticsearch
The stdin plugin is now waiting for input:
17:06:46.846 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com    ## input information
www.sina.com.cn

10. Use a browser to view the information.
## the information can be viewed under the Data Browse tab of elasticsearch-head
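The same check works from the shell. A minimal sketch, assuming node1 at 192.168.13.129 and the default index name logstash uses when none is configured:

[root@apache elk]# curl 'http://192.168.13.129:9200/_cat/indices?v'    ## a logstash-YYYY.MM.dd index should appear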
11. Output system log files to Elasticsearch:
[root@apache elk]# chmod o+r /var/log/messages    ## allow other users to read the file
[root@apache elk]# vim /etc/logstash/conf.d/system.conf    ## create the file
input {
    file {
        path => "/var/log/messages"    ## file to collect
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {    ## point the address at the node1 node
        hosts => ["192.168.13.129:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[root@apache elk]# systemctl restart logstash.service    ## restart the service
## the details can also be viewed under Data Browse
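To verify the system index from the command line, a minimal search sketch (the size parameter only limits how much is printed):

[root@apache elk]# curl 'http://192.168.13.129:9200/system-*/_search?pretty&size=1'    ## returns one stored /var/log/messages record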
12. Install Kibana data visualization on node1:
[root@node1 ~]# cd /mnt/elk/
[root@node1 elk]# rpm -ivh kibana-5.5.1-x86_64.rpm    ## install
[root@node1 elk]# cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak    ## back up
[root@node1 kibana]# vim kibana.yml    ## modify the configuration file
server.port: 5601    ## port number
server.host: "0.0.0.0"    ## listen on any network segment
elasticsearch.url: "http://192.168.13.129:9200"    ## address of the local node1
kibana.index: ".kibana"    ## index name
[root@node1 kibana]# systemctl start kibana.service    ## start the service
[root@node1 kibana]# systemctl enable kibana.service

13. Access Kibana from the browser.
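If the page does not load, first confirm the service is listening. A quick sketch; the /api/status path is what Kibana 5.x serves, but treat it as an assumption for other versions:

[root@node1 kibana]# netstat -ntap | grep 5601    ## kibana should be in the LISTEN state
[root@node1 kibana]# curl -I 'http://192.168.13.129:5601/api/status'    ## expect an HTTP 200 reply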
14. Dock the Apache log files on the Apache server and produce statistics:
[root@apache elk]# vim /etc/logstash/conf.d/apache_log.conf    ## create the configuration file
input {
    file {
        path => "/etc/httpd/logs/access_log"    ## input information
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {    ## route the output by type
        elasticsearch {
            hosts => ["192.168.13.129:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.13.129:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
[root@apache elk]# logstash -f /etc/logstash/conf.d/apache_log.conf    ## run logstash with the configuration file

15. Visit the web page, then view the statistics in Kibana. A quick command-line check is sketched below.
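Before turning to Kibana, it helps to generate a few access-log entries and confirm that both Apache indexes are being populated. A minimal sketch; any HTTP client would do for the first step:

[root@apache elk]# curl -s 'http://192.168.13.128/' > /dev/null    ## produce an access_log entry
[root@apache elk]# curl -s 'http://192.168.13.129:9200/_cat/indices?v' | grep apache    ## apache_access-* (and, if errors occurred, apache_error-*) should appear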
## Select Management > Index Patterns > Create Index Pattern to create index patterns for the two Apache logs
Thank you for reading!