2025-01-18 Update From: SLTechnology News&Howtos (Servers)
This article walks through building a micro-service log center with the ELK stack. Since many readers have not deployed one before, the steps are summarized below.
Introduction to ELK
What is ELK? ELK is the combination of three open-source tools: Elasticsearch, Logstash, and Kibana, each serving a different function. The combination is also known as the ELK stack (official site: https://www.elastic.co/). Its main advantages are:
1. Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities.
2. Relatively simple configuration: Elasticsearch is managed entirely through a JSON interface, Logstash uses modular configuration, and Kibana's configuration file is simpler still.
3. High retrieval performance: thanks to its design, even real-time queries over tens of billions of documents can respond within seconds.
4. Linear cluster scaling: both Elasticsearch and Logstash can scale out flexibly.
5. Polished front end: Kibana's front end is attractive and simple to operate.
Elasticsearch
Elasticsearch is a highly scalable full-text search and analytics engine based on Apache Lucene. It stores, searches, and analyzes large volumes of data in near real time, and it can handle large-scale log data from sources such as Nginx, Tomcat, and Syslog.
Logstash
A data-collection engine. It dynamically collects data from a variety of sources, then filters, parses, enriches, and normalizes it before storing it at a user-specified destination. It supports parsing both plain logs and custom JSON-formatted logs.
Kibana
A data analysis and visualization platform. It is often used together with Elasticsearch to search, analyze, and display data in statistical charts.
Filebeat
First, a caveat: Filebeat is not part of ELK proper; it is one of the optional components that serve the stack. Logstash's weak point is its performance and resource consumption (its default heap size is 1 GB). Although its performance has improved considerably in recent years, it is still much slower than its lighter-weight alternatives. Filebeat, a member of the Beats family, is a lightweight log shipper that compensates for this shortcoming: it pushes logs to a central Logstash. Filebeat is a single binary with no dependencies and consumes very few resources. Although it is still young, its simplicity means there is very little that can go wrong, so its reliability is high. It also exposes many tuning points, such as how it scans for new files and when it closes a file handle after a file has not changed for a while.
Start deployment
System environment check
1. The elasticsearch service requires a Java runtime, so first check the Java environment on the server. JDK 1.8 or later is recommended.
[root@iZbp136dr1iwajle0r9j83Z soft]# java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)
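The version check above can be scripted; the sketch below (jdk_ok is a hypothetical helper name, not part of any tool mentioned here) accepts both legacy "1.8.0_221" and modern "11.0.2" style version strings.

```shell
# Sketch: verify the JDK is at least 1.8; jdk_ok is a hypothetical helper.
jdk_ok() {
  major=${1%%.*}                 # "1" for 1.x JDKs, "11" for modern ones
  if [ "$major" = "1" ]; then
    minor=${1#1.}
    minor=${minor%%.*}           # "8" from "1.8.0_221"
    [ "$minor" -ge 8 ]
  else
    [ "$major" -ge 8 ]
  fi
}
jdk_ok "1.8.0_221" && echo "jdk version ok"   # prints "jdk version ok"
```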
2. Create a user. Because elasticsearch cannot run under the root account, create a dedicated user.
[root@iZbp136dr1iwajle0r9j83Z ~] # useradd elk
3. Create a folder
[root@iZbp136dr1iwajle0r9j83Z ~]# su - elk          # switch to the elk user
[elk@iZbp136dr1iwajle0r9j83Z ~]$ mkdir soft         # stores the original package files
[elk@iZbp136dr1iwajle0r9j83Z ~]$ mkdir applications # stores the deployed files
Download the ELK files
Download each package from the official website. Make sure the versions correspond to each other; otherwise you may run into inexplicable problems.
[root@iZbp136dr1iwajle0r9j83Z soft]# ll -l
total 708484
-rw-rw-r-- 1 app app 290094012 Jan 2 14:31 elasticsearch-7.5.1-linux-x86_64.tar.gz
-rw-rw-r-- 1 app app  24086235 Jan 2 14:26 filebeat-7.5.1-x86_64.rpm
-rw-rw-r-- 1 app app 238481011 Jan 2 13:28 kibana-7.5.1-linux-x86_64.tar.gz
-rw-rw-r-- 1 app app 172809530 Jan 2 13:27 logstash-7.5.1.zip
These are the downloaded files; they currently belong to the app user. For unified management, transfer ownership of all of them to the elk user.
[root@iZbp136dr1iwajle0r9j83Z soft]# chown -R elk:elk *
[root@iZbp136dr1iwajle0r9j83Z soft]# ll -l
total 708484
-rw-rw-r-- 1 elk elk 290094012 Jan 2 14:31 elasticsearch-7.5.1-linux-x86_64.tar.gz
-rw-rw-r-- 1 elk elk  24086235 Jan 2 14:26 filebeat-7.5.1-x86_64.rpm
-rw-rw-r-- 1 elk elk 238481011 Jan 2 13:28 kibana-7.5.1-linux-x86_64.tar.gz
-rw-rw-r-- 1 elk elk 172809530 Jan 2 13:27 logstash-7.5.1.zip
Extract the file to the applications folder
# process elasticsearch-7.5.1
[elk@iZbp136dr1iwajle0r9j83Z soft]$ tar -zxvf elasticsearch-7.5.1-linux-x86_64.tar.gz
[elk@iZbp136dr1iwajle0r9j83Z soft]$ mv elasticsearch-7.5.1 ../applications/
# process logstash-7.5.1.zip
[elk@iZbp136dr1iwajle0r9j83Z soft]$ unzip logstash-7.5.1.zip
[elk@iZbp136dr1iwajle0r9j83Z soft]$ mv logstash-7.5.1 ../applications/
# process kibana-7.5.1
[elk@iZbp136dr1iwajle0r9j83Z soft]$ tar -zxvf kibana-7.5.1-linux-x86_64.tar.gz
[elk@iZbp136dr1iwajle0r9j83Z soft]$ mv kibana-7.5.1-linux-x86_64 ../applications/
Deploy elasticsearch
Modify the elasticsearch.yml file
[elk@iZbp136dr1iwajle0r9j83Z config]$ vim elasticsearch.yml
# cluster name
cluster.name: els
# node name
node.name: els-1
# data storage path
path.data: /data/els/data
# log storage path
path.logs: /data/logs/els/log
# bind IP address
network.host: 172.16.240.19
# port number
http.port: 7008
discovery.seed_hosts: ["172.16.240.19"]
cluster.initial_master_nodes: ["els-1"]
# allow cross-origin access (needed when kibana fetches data)
http.cors.enabled: true
http.cors.allow-origin: "*"
Create a data directory
mkdir -p /data/els
mkdir -p /data/logs/els
Start elasticsearch
[elk@iZbp136dr1iwajle0r9j83Z soft]$ cd /home/elk/applications/elasticsearch-7.5.1/bin/
[elk@iZbp136dr1iwajle0r9j83Z bin]$ ./elasticsearch
Visit elasticsearch_ip:port (here 172.16.240.19:7008) to check that it started properly.
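Instead of visiting the address by hand, the check can be scripted. A minimal sketch, assuming the host and port from the configuration above (es_ready is a hypothetical helper; adjust the URL for your network):

```shell
# Sketch of a readiness check; es_ready is a hypothetical helper and the
# URL comes from the elasticsearch.yml above -- adjust for your network.
es_ready() {
  curl -fs --max-time 5 "$1" >/dev/null 2>&1
}
if es_ready "http://172.16.240.19:7008"; then
  echo "elasticsearch is up"
else
  echo "elasticsearch not reachable yet"
fi
```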
Deploy kibana
Edit configuration file
cd /home/elk/applications/kibana-7.5.1-linux-x86_64/config
vim kibana.yml
# modify the configuration
server.port: 7011
server.host: "0.0.0.0"
server.name: "kibana-server"
# in kibana 7.x this setting is elasticsearch.hosts (formerly elasticsearch.url)
elasticsearch.hosts: ["http://172.16.240.19:7008"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"
Start kibana
cd /home/elk/applications/kibana-7.5.1-linux-x86_64/bin
./kibana
Visit kibana_ip:port
Deploy logstash
Since the log data will be collected through filebeat, create a separate configuration file to handle the filebeat input.
cd /home/elk/applications/logstash-7.5.1/config
vim beat.conf

input {
  # receive data read by filebeat
  beats {
    port => 7110
    codec => "json"
  }
}
output {
  # output to es
  elasticsearch {
    hosts => ["172.16.240.19:7008"]
    index => "cloud"
    document_type => "log"
    manage_template => false
  }
}
Test whether the configuration file syntax is correct
cd /home/elk/applications/logstash-7.5.1/bin/
./logstash -f /home/elk/applications/logstash-7.5.1/config/beat.conf -t
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /home/elk/applications/logstash-7.5.1/logs which is now configured via log4j2.properties
[2020-01-02T16:12:12,309][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/home/elk/applications/logstash-7.5.1/data/queue"}
[2020-01-02T16:12:12,461][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/home/elk/applications/logstash-7.5.1/data/dead_letter_queue"}
[2020-01-02T16:12:12,890][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-01-02T16:12:14,200][INFO ][org.reflections.Reflections] Reflections took 41 ms to scan 1 urls, producing 20 keys and 40 values
[2020-01-02T16:12:14,673][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature. If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", ... (plugin parameter dump elided)}
Configuration OK
[2020-01-02T16:12:14,709][INFO ][logstash.runner] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
"Configuration OK" indicates the configuration file is valid. Now start logstash with beat.conf loaded:
./logstash -f /home/elk/applications/logstash-7.5.1/config/beat.conf
Deploy filebeat
Install the filebeat component on the host where logs need to be collected
[root@iZbp14b5r2lytw5nc5z3w2Z filebeat]# pwd
/usr/share/filebeat
[root@iZbp14b5r2lytw5nc5z3w2Z filebeat]# ll -l
total 23524
-rw-r--r-- 1 root root 24086235 Jan 2 16:19 filebeat-7.5.1-x86_64.rpm
[root@iZbp14b5r2lytw5nc5z3w2Z filebeat]# sudo rpm -vi filebeat-7.5.1-x86_64.rpm
warning: filebeat-7.5.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing packages...
filebeat-7.5.1-1.x86_64
Edit /etc/filebeat/filebeat.yml to set the connection information:
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  enabled: true
  tags: ["clo***dmin"]
  paths:
    - /logs/S*/clou***min/**/*.log
  # merge lines that do not begin with a timestamp into the previous event
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

#-------------------------- Logstash output --------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.16.240.19:7112"]
  index: joy#loud
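To see what the multiline settings do, here is a small local experiment (the sample log content is invented for illustration): lines matching the timestamp pattern start a new event, and everything else is merged into the previous event by filebeat (negate: true, match: after).

```shell
# Local experiment with the multiline pattern above; the sample log
# content is invented for illustration.
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
[2020-01-02 16:12:14] INFO starting service
java.lang.NullPointerException
    at com.example.Main.run(Main.java:42)
[2020-01-02 16:12:15] INFO recovered
EOF
# Count the lines that would begin a new log event.
starts=$(grep -cE '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}' "$logfile")
echo "events: $starts"   # prints "events: 2"
rm -f "$logfile"
```

Here the stack-trace lines do not match the pattern, so they ride along with the preceding timestamped line as a single event.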
Start the filebeat service
cd /usr/share/filebeat/bin
./filebeat -e -c /etc/filebeat/filebeat.yml
Configure kibana
Check to see if any logs have been collected after configuration.
You can see that data is already coming in. Then look at the tags value and add it to the search conditions to filter out the log output of different services.
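Besides looking in Kibana, you can confirm that documents reached the "cloud" index by querying Elasticsearch's _count API. A sketch follows; doc_count is a hypothetical helper that extracts the count field from the compact JSON response.

```shell
# Sketch: extract the "count" field from an Elasticsearch _count response,
# e.g. curl -s "http://172.16.240.19:7008/cloud/_count"
# doc_count is a hypothetical helper; it assumes compact JSON like {"count":123,...}.
doc_count() {
  sed -n 's/.*"count":\([0-9]*\).*/\1/p'
}
echo '{"count":42,"_shards":{"total":1,"successful":1}}' | doc_count   # prints 42
```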
What is described above is a tutorial for building a micro-service log center with the ELK stack; to master the specifics, experiment with it hands-on.