Shulou (Shulou.com) 06/03 Report - SLTechnology News & Howtos > Servers (updated 2025-02-24)
ELK + filebeat cluster deployment
Introduction to ELK
1. Elasticsearch
Elasticsearch is a real-time distributed search and analytics engine that lets you explore your data at a speed and scale not previously possible. It is used for full-text search, structured search, analytics, and combinations of the three.
2. Logstash
Logstash is a powerful data processing tool. It handles data transport, format processing, and formatted output, and has a rich plug-in ecosystem; it is commonly used for log processing.
3. Kibana
Kibana is a free, open-source tool that provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Official website address: https://www.elastic.co/cn/downloads/
Note: the IP addresses in the configuration files below should be adjusted to your actual environment.
Environment preparation: three Linux servers running the same OS.

elk-node1  192.168.243.162  data + master node (install elasticsearch, logstash, kibana, filebeat)
elk-node2  192.168.243.163  data node (install elasticsearch, filebeat)
elk-node3  192.168.243.164  data node (install elasticsearch, filebeat)
Modify the hosts file. The hosts file is the same on every node.

vim /etc/hosts
192.168.243.162 elk-node1
192.168.243.163 elk-node2
192.168.243.164 elk-node3
Install JDK 11 (binary installation)

If Java is already installed, skip this step.

cd /home/tools && wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz

Extract to the target directory:

tar -xzvf openjdk-11.0.1_linux-x64_bin.tar.gz -C /usr/local/jdk

Configure environment variables (set the Java environment) in /etc/profile:

JAVA_HOME=/usr/local/jdk/jdk-11.0.1
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH

Make the environment variables take effect:

source /etc/profile

Alternatively, install from the yum repository:

yum -y install java
java -version
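Instead of editing /etc/profile directly, the same variables can live in a drop-in file; a minimal sketch, assuming the archive unpacked to /usr/local/jdk/jdk-11.0.1 (the path used in this guide; adjust to your layout):

```shell
# Hypothetical drop-in file: /etc/profile.d/jdk11.sh
# Assumes the tarball unpacked to /usr/local/jdk/jdk-11.0.1 as above.
export JAVA_HOME=/usr/local/jdk/jdk-11.0.1
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin
```

A drop-in keeps the vendor-managed /etc/profile untouched and is picked up by every new login shell.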
Modify system kernel parameters

Adjust the maximum virtual memory map count. Append the following to /etc/sysctl.conf:

vim /etc/sysctl.conf
vm.max_map_count=262144
sysctl -p

Append the following to /etc/security/limits.conf:

vim /etc/security/limits.conf
* soft nofile 1000000
* hard nofile 1000000
* soft nproc 1000000
* hard nproc 1000000
* soft memlock unlimited
* hard memlock unlimited

Then edit the per-user nproc override:

cd /etc/security/limits.d
vi 20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 4096
root soft nproc unlimited

Change the * to the user name that runs Elasticsearch, for example:

esyonghu soft nproc 4096
root soft nproc unlimited
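After running sysctl -p it is worth confirming the value the kernel actually reports, since Elasticsearch refuses to start when it is too low. A small check sketch (262144 is the value set above):

```shell
# Sketch: verify vm.max_map_count meets the value configured above.
required=262144
current=$(sysctl -n vm.max_map_count 2>/dev/null || cat /proc/sys/vm/max_map_count)
if [ "${current:-0}" -ge "$required" ]; then
  echo "vm.max_map_count OK ($current)"
else
  echo "vm.max_map_count too low (${current:-unset} < $required); re-run sysctl -p"
fi
```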
Download the dependency packages and install the repo source

yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate

vim /etc/yum.repos.d/elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

yum repolist
# Deploy the elasticsearch cluster; run on all nodes

yum -y install elasticsearch
grep "^[a-z]" /etc/elasticsearch/elasticsearch.yml
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk
node.name: elk-node1          # use the corresponding hostname on each node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
transport.tcp.compress: true
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: ["192.168.243.162", "192.168.243.163", "192.168.243.164"]   # also configure on the other nodes
cluster.initial_master_nodes: ["192.168.243.162", "192.168.243.163", "192.168.243.164"]
discovery.zen.minimum_master_nodes: 2   # prevents cluster "split brain"; usually (master-eligible nodes / 2) + 1
node.master: true
node.data: true
xpack.security.enabled: true
http.cors.enabled: true                 # cross-origin access, so the head plug-in can reach es
http.cors.allow-origin: "*"
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
Elasticsearch is resource-hungry in production, so increase the initial JVM heap. The default is 1 GB; adjust it to your actual hardware.

vim /etc/elasticsearch/jvm.options
# modify these two lines
-Xms4g    # set the minimum heap size to 4G
-Xmx4g    # set the maximum heap size to 4G
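The 4g figure above assumes roughly an 8 GB host. A sketch of the common sizing rule of thumb (half of physical RAM, capped at 31g so the JVM keeps compressed object pointers):

```shell
# Sketch: derive Xms/Xmx as half of physical RAM, capped at 31g.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_gb=$(( mem_kb / 1024 / 1024 / 2 ))
if [ "$half_gb" -gt 31 ]; then half_gb=31; fi
if [ "$half_gb" -lt 1 ]; then half_gb=1; fi
echo "-Xms${half_gb}g"
echo "-Xmx${half_gb}g"
```

Keep Xms and Xmx equal, as the file above does, so the heap never resizes at runtime.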
Configure TLS and authentication (recommended for security; this step can be skipped)

Configure TLS on the Elasticsearch master node; press Enter through the prompts:

cd /usr/share/elasticsearch/
./bin/elasticsearch-certutil ca
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
ll
-rw------- 1 root root 3443 Jun 28 16:46 elastic-certificates.p12
-rw------- 1 root root 2527 Jun 28 16:43 elastic-stack-ca.p12

# give the generated files to the elasticsearch group
chgrp elasticsearch /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elastic-stack-ca.p12
# assign 640 permissions to these two files
chmod 640 /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elastic-stack-ca.p12
# move these two files to the elasticsearch configuration folder
mv /usr/share/elasticsearch/elastic-* /etc/elasticsearch/

Copy the TLS files to the other nodes' configuration folders:

scp /etc/elasticsearch/elastic-certificates.p12 root@192.168.243.163:/etc/elasticsearch/
scp /etc/elasticsearch/elastic-stack-ca.p12 root@192.168.243.163:/etc/elasticsearch/
scp /etc/elasticsearch/elastic-certificates.p12 root@192.168.243.164:/etc/elasticsearch/
scp /etc/elasticsearch/elastic-stack-ca.p12 root@192.168.243.164:/etc/elasticsearch/
Start the service and verify the cluster

Start the master node first, then the other nodes:

systemctl start elasticsearch
Set passwords; here all passwords are uniformly set to 123456:

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Verify the cluster from a browser:

http://192.168.243.163:9200/_cluster/health?pretty
The response should look like this:

{
  "cluster_name" : "my-elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,           # number of nodes
  "number_of_data_nodes" : 3,      # number of data nodes
  "active_primary_shards" : 4,
  "active_shards" : 8,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
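The same check can be scripted rather than eyeballed in a browser. A minimal sketch of a helper that inspects the health JSON (the host, user, and password match the setup above; the grep-based parsing is a deliberate simplification, jq would be more robust):

```shell
# Sketch: exit non-zero unless the health JSON on stdin reports a green
# cluster with 3 nodes. Feed it the curl output, e.g.:
#   curl -s -u elastic:123456 http://192.168.243.162:9200/_cluster/health | check_health
check_health() {
  json=$(cat)
  echo "$json" | grep -q '"status" *: *"green"' &&
  echo "$json" | grep -q '"number_of_nodes" *: *3'
}
```

Such a check is handy in a cron job or a deployment script that should abort when the cluster is degraded.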
# Deploy kibana

Install from the yum repository on any node:

yum -y install kibana
Modify the kibana configuration file:

vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.name: "elk-node2"
elasticsearch.hosts: ["http://192.168.243.162:9200","http://192.168.243.163:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
i18n.locale: "en"
Start the service:

systemctl start kibana

Browse to http://192.168.243.162:5601/
Install logstash

Deploy on the master node.

# yum repository installation
yum -y install logstash

# or binary installation
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.1.tar.gz
tar -zvxf logstash-7.4.1.tar.gz -C /home/elk
mkdir -p /data/logstash/{logs,data}
Modify the configuration file:

vim /etc/logstash/conf.d/logstash_debug.conf
egrep -v "#|^$" /etc/logstash/conf.d/logstash_debug.conf
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "(?
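The custom grok pattern above is cut off in the source. For reference, a minimal beats-to-Elasticsearch pipeline without the custom pattern might look like the sketch below; the stock %{COMBINEDAPACHELOG} pattern, index name, and credentials are assumptions matching the setup above, not the original author's configuration:

```
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.243.162:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "123456"
  }
}
```

Replace the grok pattern with one matching your own log format, and verify the pipeline with: /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash_debug.conf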