Shulou (Shulou.com), SLTechnology News & Howtos, 2025-03-06 Update
1. Introduction to ELK
Official website:
https://www.elastic.co/cn/
Chinese guide:
https://legacy.gitbook.com/book/chenryn/elk-stack-guide-cn/details
After version 5.0, ELK Stack was renamed Elastic Stack; Elastic Stack is equivalent to ELK Stack + Beats.
ELK Stack contains: Elasticsearch, Logstash, Kibana
Elasticsearch is a real-time full-text search and analytics engine that provides the three major functions of collecting, analyzing, and storing data. It exposes its search features through open, efficient REST and Java APIs, scales out as a distributed system, and is built on top of the Apache Lucene search engine library.
Logstash collects logs (it supports almost any type, including system logs, error logs, and custom application logs), parses them into JSON format, and hands them to Elasticsearch.
Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It retrieves data through Elasticsearch's REST interface, and lets users not only create custom dashboard views of their data but also query and filter it in ad-hoc ways.
Beats is a family of lightweight log shippers. In the early ELK architecture, Logstash both collected and parsed the logs, but Logstash consumes comparatively large amounts of memory, CPU, I/O, and other resources. By contrast, the CPU and memory footprint of Beats is essentially negligible.
X-Pack is an extension pack for the Elastic Stack that integrates security, alerting, monitoring, and reporting. The component is free of charge but not open source.
2. ELK architecture
3. ELK installation environment preparation
1. Configure host name resolution on each node
[root@node-11 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.71.11.1  node-1
10.71.11.2  node-2
10.71.11.11 node-11
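Once the /etc/hosts entries are in place, a quick sanity check is to confirm that each node name maps to the IP you expect. A minimal sketch, where the `hosts` variable stands in for the real /etc/hosts file:

```shell
# Sample of the /etc/hosts entries added above; in practice read /etc/hosts itself.
hosts='10.71.11.1 node-1
10.71.11.2 node-2
10.71.11.11 node-11'

# Print the IP each node name maps to according to the hosts entries.
for n in node-1 node-2 node-11; do
  ip=$(printf '%s\n' "$hosts" | awk -v n="$n" '$2 == n {print $1}')
  echo "$n -> $ip"
done
```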
2. Install the JDK on each node
[root@node-11 ~]# yum install -y java-1.8.0-openjdk
Check the JDK version
[root@node-1 ~]# java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
Special note: Logstash does not currently support Java 9.
Install Elasticsearch
Note: execute the following commands on all three nodes
Import the GPG key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure the yum repository
[root@node-1 ~]# vi /etc/yum.repos.d/elastic.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Update the yum cache
yum makecache
Since downloading the package this way can be slow, install Elasticsearch from an rpm package instead.
rpm download address:
https://www.elastic.co/downloads/elasticsearch
Upload the downloaded rpm package to each node and install it
rpm -ivh elasticsearch-6.2.3.rpm
Edit /etc/elasticsearch/elasticsearch.yml and add or modify the following parameters
## define the elk cluster name and node name
cluster.name: cluster_elk
node.name: node-1
node.master: true
node.data: false
## define the host IP and port
network.host: 10.71.11.1
http.port: 9200
## define the cluster nodes
discovery.zen.ping.unicast.hosts: ["node-1", "node-2", "node-11"]
Copy the configuration file /etc/elasticsearch/elasticsearch.yml from node-1 to node-2 and node-11
[root@node-1 ~]# scp /etc/elasticsearch/elasticsearch.yml node-2:/tmp/
elasticsearch.yml                             100% 3001     3.6MB/s   00:00
[root@node-1 ~]# scp /etc/elasticsearch/elasticsearch.yml node-11:/tmp/
root@node-11's password:
elasticsearch.yml                             100% 3001
[root@node-11 yum.repos.d]# cp /tmp/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml
cp: overwrite '/etc/elasticsearch/elasticsearch.yml'? y
[root@node-11 yum.repos.d]# vim /etc/elasticsearch/elasticsearch.yml
Edit /etc/elasticsearch/elasticsearch.yml on node-2
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
#cluster.name: my-application
cluster.name: cluster_elk
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
#node.name: node-1
node.name: node-2
node.master: false
node.data: true
# Add custom attributes to the node:
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 10.71.11.2
# Set a custom port for HTTP:
http.port: 9200
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["node-1", "node-2", "node-11"]
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#discovery.zen.minimum_master_nodes:
# For more information, consult the zen discovery module documentation.
# ---------------------------------- Gateway -----------------------------------
Modify the /etc/elasticsearch/elasticsearch.yml configuration file on node-11 in the same way
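Only a handful of settings differ between the master copy and each data node, so the per-node file can be derived with a script. A hypothetical sketch (the `make_node_cfg` helper and the sed-based approach are illustration, not part of the original procedure):

```shell
# Derive a data node's config from the master node's, assuming only node.name,
# node.master/node.data, and network.host differ. Reads the master config on
# stdin; args: node_name node_ip.
make_node_cfg() {
  sed -e "s/^node.name: .*/node.name: $1/" \
      -e "s/^node.master: .*/node.master: false/" \
      -e "s/^node.data: .*/node.data: true/" \
      -e "s/^network.host: .*/network.host: $2/"
}

# Stand-in for node-1's /etc/elasticsearch/elasticsearch.yml.
master_cfg='cluster.name: cluster_elk
node.name: node-1
node.master: true
node.data: false
network.host: 10.71.11.1
http.port: 9200'

# Print node-2's version of the file.
printf '%s\n' "$master_cfg" | make_node_cfg node-2 10.71.11.2
```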
Start elasticsearch on node-1
[root@node-1 ~]# systemctl start elasticsearch
[root@node-1 ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-04-12 21:11:28 CST; 12s ago
     Docs: http://www.elastic.co
 Main PID: 17297 (java)
    Tasks: 67
   Memory: 1.2G
   CGroup: /system.slice/elasticsearch.service
           └─17297 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPre...
Apr 12 21:11:28 node-1 systemd[1]: Starting Elasticsearch...
Apr 12 21:11:28 node-1 systemd[1]: Started Elasticsearch.
View cluster logs
[root@node-1 ~]# tail -f /var/log/elasticsearch/cluster_elk.log
[2018-04-12T21:11:34,704][INFO ][o.e.d.DiscoveryModule    ] [node-1] using discovery type [zen]
[2018-04-12T21:11:35,187][INFO ][o.e.n.Node               ] [node-1] initialized
[2018-04-12T21:11:35,187][INFO ][o.e.n.Node               ] [node-1] starting ...
[2018-04-12T21:11:35,370][INFO ][o.e.t.TransportService   ] [node-1] publish_address {10.71.11.1:9300}, bound_addresses {10.71.11.1:9300}
[2018-04-12T21:11:35,380][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-04-12T21:11:38,423][INFO ][o.e.c.s.MasterService    ] [node-1] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300}
[2018-04-12T21:11:38,428][INFO ][o.e.c.s.ClusterApplierService] [node-1] new_master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300}, reason: apply cluster state (from master [master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-04-12T21:11:38,442][INFO ][o.e.h.n.Netty4HttpServerTransport] [node-1] publish_address {10.71.11.1:9200}, bound_addresses {10.71.11.1:9200}
[2018-04-12T21:11:38,442][INFO ][o.e.n.Node               ] [node-1] started
[2018-04-12T21:11:38,449][INFO ][o.e.g.GatewayService     ] [node-1] recovered [0] indices into cluster_state
Check the health status of the cluster on the master node
[root@node-1 ~]# curl '10.71.11.1:9200/_cluster/health?pretty'
{
  "cluster_name" : "cluster_elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 0,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
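For scripted health checks, the "status" field can be pulled out of that reply. A minimal sketch with a crude sed parse (a real script would use jq; the sample string stands in for a live `curl -s '10.71.11.1:9200/_cluster/health'` call):

```shell
# Extract the "status" value from a cluster-health JSON reply on $1.
es_health_status() {
  printf '%s\n' "$1" | sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p'
}

# Sample reply, standing in for the live curl call above.
sample='{"cluster_name":"cluster_elk","status":"green","timed_out":false}'
es_health_status "$sample"   # prints: green
```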
View the cluster details on node-1
[root@node-1 ~]# curl '10.71.11.1:9200/_cluster/state?pretty'
{
  "cluster_name" : "cluster_elk",
  "compressed_size_in_bytes" : 226,
  "version" : 2,
  "state_uuid" : "-LLN7fEYQJiKZSLqitdOvQ",
  "master_node" : "PVxBZmElTXOHkzavFVFEnA",
  "blocks" : { },
  "nodes" : {
    "PVxBZmElTXOHkzavFVFEnA" : {
      "name" : "node-1",
      "ephemeral_id" : "xsTmwB7MTwu-8cwwALyTPA",
      "transport_address" : "10.71.11.1:9300",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "LaaRmRfRTfOY-ApuNz_nfA",
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : { }
  },
  "snapshots" : {
    "snapshots" : [ ]
  },
  "restore" : {
    "snapshots" : [ ]
  },
  "snapshot_deletions" : {
    "snapshot_deletions" : [ ]
  }
}
Install Kibana
Note: execute on the node-1 node
yum install -y kibana
Note: installation via yum is relatively slow, so install from the rpm package instead.
Download kibana-6.2.3-x86_64.rpm and upload it to node-1, then install Kibana
https://www.elastic.co/downloads/kibana
[root@node-1 ~]# rpm -ivh kibana-6.2.3-x86_64.rpm
Preparing...                          ################################# [100%]
        package kibana-6.2.3-1.x86_64 is already installed
Edit /etc/kibana/kibana.yml
server.port: 5601                              ## listening port, default 5601
server.host: "10.71.11.1"                      ## service hostname or IP
elasticsearch.url: "http://10.71.11.1:9200"    ## how Kibana communicates with Elasticsearch
logging.dest: /var/log/kibana.log              ## by default Kibana logs to /var/log/messages; a custom path such as /var/log/kibana.log can also be set
Note on server.host: if the x-pack component is not installed, no Kibana login user or password can be set, so if a public IP is configured here, anyone can log in to Kibana. If a private IP and port are configured and you still need to reach Kibana from the public network, you can use nginx as a proxy.
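Since Kibana has no built-in authentication without x-pack, an nginx reverse proxy in front of a privately-bound Kibana can both expose it publicly and add a login. A hypothetical minimal server block (the server_name, htpasswd path, and basic-auth choice are illustrative assumptions, not part of the original setup):

```nginx
server {
    listen 80;
    server_name kibana.example.com;   # hypothetical public name

    # Optional basic auth, since Kibana itself has no login without x-pack.
    auth_basic           "Kibana";
    auth_basic_user_file /etc/nginx/kibana.htpasswd;

    location / {
        proxy_pass http://10.71.11.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```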
Start the kibana service
[root@node-1 ~]# systemctl start kibana
[root@node-1 ~]# ps aux | grep kibana
kibana     650  109  0.0 944316 99684 ?       Rsl  10:59   0:02 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root       659  0.0  0.0 112660   976 pts/6   S+   10:59   0:00 grep --color=auto kib
Access Kibana in a browser: http://10.71.11.1:5601/
Install logstash
Note: unless otherwise stated, the following operations are performed on node-2
Download logstash-6.2.3.rpm and upload it to node-2
https://www.elastic.co/downloads/logstash
Install the logstash service
[root@node-2 ~]# ls logstash-6.2.3.rpm
logstash-6.2.3.rpm
[root@node-2 ~]# rpm -ivh logstash-6.2.3.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:6.2.3-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
Configure Logstash to collect syslog logs. Edit /etc/logstash/conf.d/syslog.conf:
input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Detect configuration file syntax errors
[root@node-2 ~]# cd /usr/share/logstash/bin/
[root@node-2 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
Parameter description:
--path.settings /etc/logstash/ specifies the Logstash settings directory
-f specifies the custom configuration file
Check whether port 10514 is listening
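On a live node this would be something like `netstat -lnp | grep 10514`. As a sketch, a small helper that checks for a bound port in that kind of output (the `port_listening` name and the sample line are illustrative assumptions):

```shell
# Given `ss -ln`/`netstat -ln` style output on stdin, succeed if port $1 is bound.
port_listening() {
  grep -Eq "[:.]$1([[:space:]]|$)"
}

# Sample line as `ss -lnu` might print it once Logstash is up (illustrative only).
sample='UNCONN 0 0 0.0.0.0:10514 0.0.0.0:*'
if printf '%s\n' "$sample" | port_listening 10514; then
  echo "port 10514 is listening"
fi
```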
Edit /etc/rsyslog.conf and add the following line in the ####RULES#### section
[root@node-2 ~]# vi /etc/rsyslog.conf
*.* @@10.71.11.2:10514
After the Logstash startup command is executed, the terminal does not return to the prompt and initially prints nothing; this follows from the behavior defined in /etc/logstash/conf.d/syslog.conf, which only writes events to stdout as they arrive.
At this point, open a second SSH session to node-2 and restart rsyslog.service in the new session.
[root@node-2 ~] # systemctl restart rsyslog.service
Then execute an ssh node-2 login from the new terminal.
You will see log output in the other node-2 terminal (the one running Logstash), which shows that Logstash is collecting system logs successfully.
The following actions are performed on node-2
Edit /etc/logstash/conf.d/syslog.conf
input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  elasticsearch {
    hosts => ["10.71.11.1:9200"]
    index => "system-syslog-%{+YYYY.MM}"    ## define the index name
  }
}
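The `%{+YYYY.MM}` sprintf pattern in the index name expands to the event's year and month, so one index is created per month. A sketch of the equivalent expansion for "now", using the shell's date command:

```shell
# Compute the index name Logstash would use for an event arriving right now,
# assuming the "system-syslog-%{+YYYY.MM}" pattern from the config above.
index="system-syslog-$(date +%Y.%m)"
echo "$index"
```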
Verify that the configuration file syntax is correct
[root@node-2 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Change the owner of the logstash data directory
[root@node-2 bin]# chown -R logstash /var/lib/logstash
The logstash service takes some time to start; once it has started successfully, ports 9600 and 10514 will be listening.
Note: the logstash service log path is
/var/log/logstash/logstash-plain.log
Configure the collected logs on Kibana
Check the data index on elasticsearch first.
Edit /etc/logstash/logstash.yml on node-2 and add
http.host: "10.71.11.2"
Execute the following command on node-1 to get index information
[root@node-1 ~]# curl '10.71.11.1:9200/_cat/indices?v'
health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   system-syslog-2018.04 3Za0b5rBTYafhsxQ-A1P-g   5   1
Note: the index was generated successfully, which shows that communication between es and logstash is working.
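For scripting, just the index names can be extracted from the `_cat/indices?v` output by skipping the header row. A sketch, where a sample output stands in for the live curl call:

```shell
# Sample of `curl '10.71.11.1:9200/_cat/indices?v'` output (first line is the header).
indices_output='health status index                 uuid                   pri rep
yellow open   system-syslog-2018.04 3Za0b5rBTYafhsxQ-A1P-g   5   1'

# Print the third column (index name) of every non-header row.
printf '%s\n' "$indices_output" | awk 'NR > 1 {print $3}'
```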
Get the details of the index
[root@node-1 ~]# curl '10.71.11.1:9200/indexname?pretty'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index",
        "resource.type" : "index_or_alias",
        "resource.id" : "indexname",
        "index_uuid" : "_na_",
        "index" : "indexname"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "resource.type" : "index_or_alias",
    "resource.id" : "indexname",
    "index_uuid" : "_na_",
    "index" : "indexname"
  },
  "status" : 404
}
Collect nginx logs
Use Beats to collect logs