Source: SLTechnology News & Howtos > Servers, updated 2025-02-24
Architectural goal
Description
System: CentOS Linux release 7.5.1804
ELK version: filebeat-6.8.5-x86_64.rpm, logstash-6.8.5.rpm, elasticsearch-6.8.5.rpm, kibana-6.8.5-x86_64.rpm, kafka_2.11-2.0.0, zookeeper-3.4.12
Address, hostname, and role:

192.168.9.133  test1.xiong.com  nginx + virtual host + filebeat
192.168.9.134  test2.xiong.com  nginx + virtual host + filebeat
192.168.9.135  test3.xiong.com  elasticsearch + kibana + logstash
192.168.9.136  test4.xiong.com  elasticsearch + kibana + logstash
192.168.9.137  test5.xiong.com  redis + logstash (kafka is used here instead of redis)
192.168.9.138  test6.xiong.com  redis + logstash (kafka is used here instead of redis)
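The six address entries above follow a simple pattern (testN maps to 192.168.9.(132+N)), so they can be printed mechanically, e.g. for pasting into /etc/hosts. A minimal sketch:

```shell
# Print /etc/hosts entries for the six lab machines:
# test1..test6 map to 192.168.9.133..192.168.9.138.
hosts=$(for i in 1 2 3 4 5 6; do
  printf '192.168.9.%s test%s.xiong.com\n' "$((132 + i))" "$i"
done)
echo "$hosts"
```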
For practice you do not need to prepare this many machines; four hosts are enough.
1. Configuration

1.1. Hostnames

~]# cat /etc/hosts
192.168.9.133 test1.xiong.com
192.168.9.134 test2.xiong.com
192.168.9.135 test3.xiong.com
192.168.9.136 test4.xiong.com
192.168.9.137 test5.xiong.com
192.168.9.138 test6.xiong.com

# Turn off the firewall and SELinux
systemctl stop firewalld
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

# Synchronize time every minute
~]# crontab -l
*/1 * * * * /usr/sbin/ntpdate pool.ntp.org &> /dev/null

# Install the JDK (hosts 135, 136, 137 and 138 need it)
~]# tar xf jdk-8u181-linux-x64.tar.gz -C /usr/java/
cd /usr/java/
ln -sv jdk1.8.0_181/ default
ln -sv default/ jdk

# Raise the number of open files
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nofile 65536" >> /etc/security/limits.conf

java]# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
java]# source /etc/profile.d/java.sh
java]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

1.2. Install ELK on the server side
Hosts 9.135 and 9.136 are configured here.
# Install the server-side ELK packages
~]# rpm -ivh elasticsearch-6.8.5.rpm kibana-6.8.5-x86_64.rpm logstash-6.8.5.rpm

# Modify the configuration, then copy it to the other host; only
# network.publish_host and node.name need to change per host.
~]# cd /etc/elasticsearch
elasticsearch]# grep -v "^#" elasticsearch.yml
cluster.name: myElks                    # cluster name
node.name: test3.xiong.com              # this host's name
path.data: /opt/elasticsearch/data      # data directory
path.logs: /opt/elasticsearch/logs      # log directory
network.host: 0.0.0.0
network.publish_host: 192.168.9.136     # address this node publishes to the cluster
# discovery (unicast ping) addresses
discovery.zen.ping.unicast.hosts: ["192.168.9.135", "192.168.9.136"]
# minimum number of master-eligible nodes: (N / 2) + 1
discovery.zen.minimum_master_nodes: 2
# enable cross-origin access support (needed by the head plug-in)
http.cors.enabled: true
http.cors.allow-origin: "*"

# Create the data and log directories
elasticsearch]# mkdir /opt/elasticsearch/{data,logs} -pv
elasticsearch]# chown elasticsearch.elasticsearch /opt/elasticsearch/ -R

# Modify the startup unit: add under [Service]
elasticsearch]# vim /usr/lib/systemd/system/elasticsearch.service
Environment=JAVA_HOME=/usr/java/jdk     # java home directory
LimitMEMLOCK=infinity                   # no limit on locked memory

# Set the JVM heap to half of physical memory, but no more than 30G
elasticsearch]# vim jvm.options
-Xms2g
-Xmx2g

# Both hosts need the same configuration. Start the service and check
# that the ports are listening (you can use a tool such as ansible, or
# check with systemctl status elasticsearch).
elasticsearch]# ss -tnl | grep 92
LISTEN 0 128 ::ffff:192.168.9.136:9200 :::*
LISTEN 0 128 ::ffff:192.168.9.136:9300 :::*

# Check whether the hosts joined the cluster
elasticsearch]# curl 192.168.9.135:9200/_cat/nodes
192.168.9.136 7 95  1 0.00 0.06 0.11 mdi * test4.xiong.com
192.168.9.135 7 97 20 0.45 0.14 0.09 mdi - test3.xiong.com

# View the master
elasticsearch]# curl 192.168.9.135:9200/_cat/master
fVkp7Ld3RDGmWlGpm6t7kg 192.168.9.136 192.168.9.136 test4.xiong.com

1.2.1. Install the head plug-in

# On both 9.135 and 9.136:
# 1. Install npm (the EPEL repository is required)
]# yum -y install epel-release
]# yum -y install npm
# 2. Install the elasticsearch-head plug-in
]# cd /usr/local/src/
]# git clone git://github.com/mobz/elasticsearch-head.git
]# cd /usr/local/src/elasticsearch-head/
elasticsearch-head]# npm install grunt --save    # generate the runner
elasticsearch-head]# ll node_modules/grunt       # confirm the files were generated
elasticsearch-head]# npm install
# 3. Start head
node_modules]# nohup npm run start &
]# ss -tnl | grep 9100    # check that the port is listening
# Once it is up, browse to 9.135:9100 or 9.136:9100; one of the two is enough.
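Two details above are easy to get wrong: the quorum value for minimum_master_nodes and reading which node is the elected master out of _cat/nodes. A small sketch that recomputes both from the sample output (node count and output lines copied from above):

```shell
# Quorum formula: (master-eligible nodes / 2) + 1. With the two-node
# cluster above this gives 2, matching discovery.zen.minimum_master_nodes.
nodes=2
min_masters=$(( nodes / 2 + 1 ))
echo "$min_masters"

# In _cat/nodes output the elected master carries '*' in column 9,
# and column 10 is the node name.
cat_nodes='192.168.9.136 7 95 1 0.00 0.06 0.11 mdi * test4.xiong.com
192.168.9.135 7 97 20 0.45 0.14 0.09 mdi - test3.xiong.com'
master=$(echo "$cat_nodes" | awk '$9 == "*" { print $10 }')
echo "$master"
```

Note the tradeoff: with only two master-eligible nodes, a quorum of 2 means the cluster stops electing a master as soon as either node is down.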
1.2.2. Configure kibana

kibana]# grep -v "^#" kibana.yml | grep -v "^$"
server.port: 5601
server.host: "0.0.0.0"
server.name: "test3.xiong.com"    # on the other host only the hostname changes
elasticsearch.hosts: ["http://192.168.9.135:9200", "http://192.168.9.136:9200"]
kibana]# systemctl restart kibana
kibana]# ss -tnl | grep 5601      # check that the port is listening
LISTEN 0 128 *:5601 *:*
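The grep pipeline used above is a handy idiom for viewing only the effective settings: the first grep drops comment lines, the second drops blank lines. A sketch on a throwaway file with hypothetical sample content:

```shell
# Show only the effective settings of a config file:
# strip comment lines, then strip blank lines.
f=$(mktemp)
cat > "$f" <<'EOF'
# kibana sample (hypothetical)
server.port: 5601

server.host: "0.0.0.0"
EOF
effective=$(grep -v "^#" "$f" | grep -v "^$")
echo "$effective"
rm -f "$f"
```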
1.2.3. Configure logstash

logstash]# vim /etc/default/logstash
JAVA_HOME="/usr/java/jdk"    # add the java environment variable

1.3. nginx + filebeat
Hosts: 192.168.9.133, 9.134
1.3.1. Install

~]# cat /etc/yum.repos.d/nginx.repo    # configure the nginx yum repository
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

~]# yum -y install nginx
~]# rpm -ivh filebeat-6.8.5-x86_64.rpm

1.3.2. Change the access log to JSON format

]# vim /etc/nginx/nginx.conf
http {
    # add a log_format named access_json
    log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"status":"$status"}';
}
# use it inside a server block:
server {
    access_log /var/log/nginx/default_access.log access_json;
}

# Both hosts need the upstream below in the http section; on the other
# host the backup role is swapped.
~]# vim /etc/nginx/nginx.conf
upstream kibana {
    server 192.168.9.135:5601 max_fails=3 fail_timeout=30s;
    server 192.168.9.136:5601 backup;
}

~]# vim /etc/nginx/conf.d/two.conf
server {
    listen 5601;
    server_name 192.168.9.133;    # change to this host's address
    access_log /var/log/nginx/kinaba_access.log access_json;
    location / {
        proxy_pass http://kibana;
    }
}

1.4. logstash + kafka
Hosts: 192.168.9.137, 9.138
1.4.1. Install kafka

# 1. Install JDK 1.8.
# 2. Install kafka and zookeeper on both machines; apart from the
#    listening address the configuration is identical.
mv kafka_2.11-2.0.0/ zookeeper-3.4.12/ /opt/hadoop/
cd /opt/hadoop/
ln -sv kafka_2.11-2.0.0/ kafka
ln -sv zookeeper-3.4.12/ zookeeper
cd /opt/hadoop/kafka/config
vim server.properties
    listeners=PLAINTEXT://192.168.9.138:9092    # set the listening address
    log.dirs=/opt/logs/kafka_logs
vim zookeeper.properties
    dataDir=/opt/logs/zookeeper
# copy /opt/hadoop/zookeeper/conf/zoo_sample.cfg to zoo.cfg
vim /opt/hadoop/zookeeper/conf/zoo.cfg
    dataDir=/opt/logs/zookeeperDataDir
mkdir /opt/logs/{zookeeper,kafka_logs,zookeeperDataDir} -pv
chmod +x /opt/hadoop/zookeeper/bin/*.sh
chmod +x /opt/hadoop/kafka/bin/*.sh

# 3. Unit files for starting at boot
cat kafka.service
[Unit]
Description=kafka 9092
After=zookeeper.service       # hard dependency: zookeeper must start first
Requires=zookeeper.service
[Service]
Type=simple
Environment=JAVA_HOME=/usr/java/default
Environment=KAFKA_PATH=/opt/hadoop/kafka:/opt/hadoop/kafka/bin
ExecStart=/opt/hadoop/kafka/bin/kafka-server-start.sh /opt/hadoop/kafka/config/server.properties
ExecStop=/opt/hadoop/kafka/bin/kafka-server-stop.sh
Restart=always
[Install]
WantedBy=multi-user.target

cat zookeeper.service
[Unit]
Description=Zookeeper Service
After=network.target
ConditionPathExists=/opt/hadoop/zookeeper/conf/zoo.cfg
[Service]
Type=forking
Environment=JAVA_HOME=/usr/java/default
ExecStart=/opt/hadoop/zookeeper/bin/zkServer.sh start
ExecStop=/opt/hadoop/zookeeper/bin/zkServer.sh stop
Restart=always
[Install]
WantedBy=multi-user.target

# 4. Start
mv kafka.service zookeeper.service /usr/lib/systemd/system/
systemctl restart zookeeper kafka
systemctl status zookeeper
systemctl status kafka
ss -tnl
LISTEN 0 50 ::ffff:192.168.9.138:9092 :::*
LISTEN 0 50 :::2181 :::*
LISTEN 0 50 ::ffff:192.168.9.137:9092 :::*

1.4.2. Install logstash

# 1. Install the rpm
rpm -ivh logstash-6.8.5.rpm
# or install via yum after creating a repo file
]# vim /etc/yum.repos.d/logstash.repo
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# 2. Point the logstash startup file at the JDK
sed -i '1a JAVA_HOME="/usr/java/jdk"' /etc/default/logstash

2. Log collection

2.1. Configure filebeat on nginx

# filebeat configuration on 192.168.9.133
~]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/kinaba_access.log    # this file needs 755 permissions
  exclude_lines: ['^DBG']
  exclude_files: ['.gz$']
  fields:
    type: kinaba-access-9133
    ip: 192.168.9.133
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
output.logstash:
  hosts: ["192.168.9.137:5044"]
  worker: 2    # start two worker threads

2.2. Configure logstash

~]# cat /etc/logstash/conf.d/nginx-filebeats.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
  # stdout {                  # get into the habit of printing rubydebug
  #   codec => "rubydebug"    # output to the screen first, then add kafka
  # }
  kafka {
    bootstrap_servers => "192.168.9.137:9092"
    codec => "json"
    topic_id => "logstash-kinaba-nginx-access"
  }
}

# Print to screen: /usr/share/logstash/bin/logstash -f nginx-filebeats.conf
# Syntax check:    /usr/share/logstash/bin/logstash -f nginx-filebeats.conf -t
# Restart logstash, then view the log: tailf /var/log/logstash/logstash-plain.log

# List the topics
~]# /opt/hadoop/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.9.137:2181
logstash-kinaba-nginx-access
# View the topic's contents
~]# /opt/hadoop/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.9.137:9092 --topic logstash-kinaba-nginx-access --from-beginning
{"host":{"architecture":"x86_64","containerized":false,"os":{"version":"7 (Core)","codename":"Core","platform":"centos","family":"redhat","name":"CentOS Linux"},"name":"test1.xiong.com","id":"e70c4e18a6f243c69211533f14283599"},"@timestamp":"2019-12-27T02:06:17.326Z","log":{"file":{"path":"/var/log/nginx/kinaba_access.log"}},"fields":{"type":"kinaba-access-9133","ip":"192.168.9.133"},"message":"{\"@timestamp\":...,\"referer\":\"http://192.168.9.133:5601/app/timelion\",\"status\":\"304\"}","source":"/var/log/nginx/kinaba_access.log","@version":"1","offset":83382,"beat":{"version":"6.8.5","hostname":"test1.xiong.com","name":"test1.xiong.com"},"prospector":{"type":"log"},"input":{"type":"log"},"tags":["beats_input_codec_plain_applied"]}

2.3. logstash on the ELK host: 192.168.9.135

]# cat /etc/logstash/conf.d/logstash-kinaba-nginx.conf
input {
  kafka {
    bootstrap_servers => "192.168.9.137:9092"
    decorate_events => true
    consumer_threads => 2
    topics => "logstash-kinaba-nginx-access"
    auto_offset_reset => "latest"
  }
}
output {
  # stdout {                  # form a good habit: always print to the
  #   codec => "rubydebug"    # screen first
  # }
  if [fields][type] == "kinaba-access-9133" {
    elasticsearch {
      hosts => ["192.168.9.135:9200"]
      codec => "json"
      index => "logstash-kinaba-access-%{+YYYY.MM.dd}"
    }
  }
}

# Print to screen: /usr/share/logstash/bin/logstash -f logstash-kinaba-nginx.conf
# Syntax check:    /usr/share/logstash/bin/logstash -f logstash-kinaba-nginx.conf -t
# View the log:    tailf /var/log/logstash/logstash-plain.log
# Restart logstash, wait a moment, visit the web page a few times, then
# check the index:
~]# curl http://192.168.9.135:9200/_cat/indices
green open logstash-kinaba-access-2019.12.27 AcCjLtCPTryt6DZkl5KbPw 5 1 100 0 327.7kb 131.8kb
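The routing decision in 2.3 can be mimicked in the shell: pull fields.type out of a consumed event and apply the same conditional logstash uses before writing to elasticsearch. A sketch using a trimmed, hypothetical event modeled on the consumer output above (python3 is assumed for JSON parsing):

```shell
# Extract fields.type from an event and decide, like the logstash
# conditional above, whether it goes to the kinaba-access index.
event='{"fields":{"type":"kinaba-access-9133","ip":"192.168.9.133"},"beat":{"hostname":"test1.xiong.com"}}'
ftype=$(echo "$event" | python3 -c 'import json, sys; print(json.load(sys.stdin)["fields"]["type"])')
echo "$ftype"
if [ "$ftype" = "kinaba-access-9133" ]; then
  # logstash expands %{+YYYY.MM.dd} from the event's timestamp;
  # date +%Y.%m.%d produces the same shape for today
  index="logstash-kinaba-access-$(date +%Y.%m.%d)"
  echo "$index"
fi
```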