ELK5.3+Kafka cluster configuration


[1] Resource preparation

# 3 servers, 4C*8G: install Zookeeper, Kafka, and Logstash broker (input: filebeat; output: Kafka)

10.101.2.23 10.101.2.24 10.101.2.25

# 2 servers, 4C*8G: install Logstash indexer (input: Kafka; output: Elasticsearch)

10.101.2.26 10.101.2.27

# 3 servers, 8C*16G: install Elasticsearch

10.101.2.28 10.101.2.29 10.101.2.30

# 2 servers, 2C*4G: install Kibana

10.101.2.31 10.101.2.32

# download the installation packages

elasticsearch-5.3.1.tar.gz

filebeat-5.3.1-linux-x86_64.tar.gz

jdk-8u131-linux-x64.tar.gz

kafka_2.12-0.10.2.0.tgz

kibana-5.3.1-linux-x86_64.tar.gz

logstash-5.3.1.tar.gz

node-v7.9.0-linux-x64.tar.gz

zookeeper-3.4.10.tar.gz

nginx-1.12.0.tar.gz

Upload them to the /usr/local/src directory on each server.
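
For example, from a workstation that can reach the servers (the file locations and target host here are illustrative; repeat for each server):

scp *.tar.gz *.tgz root@10.101.2.23:/usr/local/src/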

[2] General configuration

# configure hosts

vi /etc/hosts

10.101.2.23 vmserver2x23

10.101.2.24 vmserver2x24

10.101.2.25 vmserver2x25

10.101.2.26 vmserver2x26

10.101.2.27 vmserver2x27

10.101.2.28 vmserver2x28

10.101.2.29 vmserver2x29

10.101.2.30 vmserver2x30

10.101.2.31 vmserver2x31

10.101.2.32 vmserver2x32

# configure ssh access restrictions if necessary

vi /etc/hosts.allow

[3] Install the Elasticsearch cluster

# system environment

vi /etc/sysctl.conf

vm.max_map_count=262144

fs.file-max=65536

Execute sysctl -p to make the configuration take effect.

vi /etc/security/limits.conf # limits on open files and processes

* soft nofile 65536

* hard nofile 131072

* soft nproc 2048

* hard nproc 4096

* - memlock unlimited

vi /etc/security/limits.d/90-nproc.conf

* soft nproc 2048
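
To sanity-check the limits, log in again and query them; the values should match what was configured above:

ulimit -n # open files, should print 65536

ulimit -u # max user processes, should print 2048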

# configure Java environment variables

cd /usr/local/src/

tar -xvf jdk-8u131-linux-x64.tar.gz

mv jdk1.8.0_131 /usr/share/

vi /etc/profile # add the following three lines at the end, then save and exit

export JAVA_HOME=/usr/share/jdk1.8.0_131

export PATH=$JAVA_HOME/bin:$PATH

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

source /etc/profile # make the configuration take effect immediately
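
To confirm the JDK is active (the version string is what an 8u131 install would report):

java -version # should report java version "1.8.0_131"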

# decompress elasticsearch

cd /usr/local/src

tar -xvf elasticsearch-5.3.1.tar.gz

mv elasticsearch-5.3.1 /usr/local

vi /usr/local/elasticsearch-5.3.1/bin/elasticsearch # set the ES_JAVA_OPTS parameter

ES_JAVA_OPTS="-Xms8g -Xmx8g" # after uncommenting the line, be sure to delete the trailing ./bin/elasticsearch string

# add an elastic group and user, because elasticsearch does not allow starting as root

groupadd elastic

useradd elastic -g elastic

passwd elastic # set the user's password

chown -R elastic:elastic /usr/local/elasticsearch-5.3.1/

# configure elasticsearch.yml; the main parameters are as follows

cluster.name: bsd-elk

node.name: elk-2-30 # different on each node

node.master: true

node.data: true

bootstrap.memory_lock: true

bootstrap.system_call_filter: false # this parameter must be set to false on versions below CentOS 7

network.host: 0.0.0.0

http.port: 9200

discovery.zen.ping.unicast.hosts: ["10.101.2.28:9300", "10.101.2.29:9300", "10.101.2.30:9300"]

discovery.zen.minimum_master_nodes: 2

discovery.zen.ping_timeout: 60s # most articles online write this as discovery.zen.ping.timeout

http.cors.enabled: true

http.cors.allow-origin: "*"

# download node-v7.9.0-linux-x64.tar.gz, decompress it, and mv it to /usr/local/nodejs-7.9.0

chown -R elastic:elastic nodejs-7.9.0/

cd /usr/local/nodejs-7.9.0

ln -s /usr/local/nodejs-7.9.0/bin/node /usr/local/bin

ln -s /usr/local/nodejs-7.9.0/bin/npm /usr/local/bin

# install the head plug-in. For elasticsearch 5.x I have not found an offline installation method, so the server needs public network access.

# 5.x is a landmark release, and most articles online describe plug-in installation for earlier versions.

cd /usr/local/elasticsearch-5.3.1

git clone https://github.com/mobz/elasticsearch-head.git

If the git tool is missing, install it first: yum install git

cd elasticsearch-head

npm install -g grunt --registry=https://registry.npm.taobao.org # install grunt

npm install # install head

npm install grunt --save # execute this if there is no grunt file in the node_modules/grunt/bin/ directory

vi Gruntfile.js # in the connect section, add the local IP to options: hostname: '10.101.2.30'

cd /usr/local/elasticsearch-5.3.1

bin/elasticsearch -d # start elasticsearch (as the elastic user, e.g. after su - elastic)

cd elasticsearch-head

node_modules/grunt/bin/grunt server & # start the head plug-in

Visit http://10.101.2.30:9100

# install the bigdesk plug-in

cd /usr/local/elasticsearch-5.3.1

git clone https://github.com/hlstudio/bigdesk

cd bigdesk/_site

python -m SimpleHTTPServer & # start the bigdesk plug-in

Visit http://10.101.2.30:8000

Configure the other two machines (10.101.2.28, 10.101.2.29) following the same steps. Whether a node serves as master or data depends on the actual situation; my three machines are all mixed master+data nodes.

After all elasticsearch nodes are started, visit head; if you can see cluster information for all 3 nodes, the cluster is up.
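
You can also confirm cluster health from the command line with the standard REST API (any node's address works):

curl 'http://10.101.2.28:9200/_cluster/health?pretty' # expect "status": "green" and "number_of_nodes": 3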

[4] Install the ZooKeeper cluster

# zookeeper depends on java. Refer to the above for java environment configuration.

# decompress zookeeper-3.4.10.tar.gz

cd /usr/local/src

tar -xvf zookeeper-3.4.10.tar.gz

mv zookeeper-3.4.10 /usr/local

mkdir /usr/local/zookeeper-3.4.10/data # create a data storage directory on each node

# create myid file

echo 23 > /usr/local/zookeeper-3.4.10/data/myid # the myid values on 10.101.2.23, 24, and 25 are 23, 24, and 25 respectively

# configure zoo.cfg

cd /usr/local/zookeeper-3.4.10/conf/

cp zoo_sample.cfg zoo.cfg

vi zoo.cfg # the main parameters are as follows

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/usr/local/zookeeper-3.4.10/data

clientPort=2181

server.23=10.101.2.23:2888:3888

server.24=10.101.2.24:2888:3888

server.25=10.101.2.25:2888:3888

# copy configuration files to other nodes

scp zoo.cfg root@ip:/usr/local/zookeeper-3.4.10/conf/

# start the zookeeper cluster

cd /usr/local/zookeeper-3.4.10/

bin/zkServer.sh start

bin/zkServer.sh status # the leader node returns Mode: leader, followers return Mode: follower

At this point, the zookeeper cluster has been configured.
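
As an extra check, zookeeper answers four-letter-word commands on its client port (this assumes nc is installed; 3.4.10 still allows these commands by default):

echo ruok | nc 10.101.2.23 2181 # prints imok if the server is running

echo stat | nc 10.101.2.23 2181 # prints the node's mode and connection statistics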

[5] Configure the Kafka cluster

# decompress kafka_2.12-0.10.2.0.tgz and create a data directory

cd /usr/local

tar -xvf src/kafka_2.12-0.10.2.0.tgz

mkdir /usr/local/kafka_2.12-0.10.2.0/data

# configure server.properties

cd /usr/local/kafka_2.12-0.10.2.0/config

vi server.properties # the main parameters are as follows

broker.id=23 # the id values on 10.101.2.23, 24, and 25 are 23, 24, and 25 respectively

delete.topic.enable=true

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/usr/local/kafka_2.12-0.10.2.0/data

num.partitions=6

num.recovery.threads.per.data.dir=1

# log.flush.interval.messages=10000

# log.flush.interval.ms=1000

log.retention.hours=72

# log.retention.bytes=1073741824

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=10.101.2.23:2181,10.101.2.24:2181,10.101.2.25:2181

zookeeper.connection.timeout.ms=6000

# copy the configuration file to the other nodes, and don't forget to modify broker.id

scp server.properties root@ip:/usr/local/kafka_2.12-0.10.2.0/config/

# start the kafka cluster

cd /usr/local/kafka_2.12-0.10.2.0/

bin/kafka-server-start.sh config/server.properties > /dev/null &
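
To confirm that all three brokers registered with zookeeper, list their ids in the zookeeper shell (the same zkCli.sh tool used later for topic cleanup):

cd /usr/local/zookeeper-3.4.10/

bin/zkCli.sh

ls /brokers/ids # expect [23, 24, 25]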

# a few common commands, offered for reference

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test # create a topic

bin/kafka-topics.sh --list --zookeeper localhost:2181 # list the topics that have been created

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test # view topic details

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test # send messages; type a line to simulate a producer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test # consume messages; run on another kafka node to receive the messages the producer node sends

bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic test --partitions 6 # add partitions to a topic

bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test1 # delete a created topic; requires delete.topic.enable=true

If a topic cannot be deleted this way, you can remove it in zookeeper:

cd /usr/local/zookeeper-3.4.10/

bin/zkCli.sh

ls /brokers/topics # view topics

rmr /brokers/topics/test1 # delete the topic

At this point, the kafka cluster has been configured.

[6] Configure the Logstash broker cluster

# java environment configuration omitted; see above

# decompress logstash-5.3.1.tar.gz

cd /usr/local

tar -xvf src/logstash-5.3.1.tar.gz

# add the configuration file beat_to_kafka.conf

cd logstash-5.3.1

vi config/beat_to_kafka.conf # enter the following and save

input {
  beats {
    port => 5044
  }
}

filter {
}

# topic_id outputs to different topics according to the document_type configured in filebeat, for kibana packet filtering
output {
  kafka {
    bootstrap_servers => "10.101.2.23:9092,10.101.2.24:9092,10.101.2.25:9092"
    # topic_id => "bsd-log"
    topic_id => '%{[type]}'
  }
}

# start logstash

bin/logstash -f config/beat_to_kafka.conf > /dev/null &
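
Before backgrounding logstash (or any time you edit the file), you can verify the configuration parses with logstash's built-in config test:

bin/logstash -f config/beat_to_kafka.conf --config.test_and_exit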

At this point, the logstash-broker cluster configuration is complete

[7] Install filebeat on the application servers

# decompress filebeat-5.3.1-linux-x86_64.tar.gz

cd /usr/local/

tar -xvf src/filebeat-5.3.1-linux-x86_64.tar.gz

mv filebeat-5.3.1-linux-x86_64 filebeat-5.3.1

# configure the filebeat.yml file; first pick a drds server to test the waters

cd filebeat-5.3.1

vi filebeat.yml # the main parameters are as follows

# ===== start of file body =====

filebeat.prospectors:
-
  input_type: log
  paths:
    - /home/admin/drds-server/3306/logs/rms/slow.log
    - /home/admin/drds-server/3306/logs/engineering/slow.log
    - /home/admin/drds-server/3306/logs/sc_file/slow.log
    - /home/admin/drds-server/3306/logs/sc_user/slow.log
    - /home/admin/drds-server/3306/logs/sc_order/slow.log
    - /home/admin/drds-server/3306/logs/sc_inventory/slow.log
    - /home/admin/drds-server/3306/logs/sc_marketing/slow.log
    - /home/admin/drds-server/3306/logs/sc_message/slow.log
    - /home/admin/drds-server/3306/logs/sc_channel/slow.log
  # exclude_lines: ["^DBG"]
  # include_lines: ['Exception','ERR_CODE']
  # exclude_files: [".gz$"]
  document_type: drds-slow
  # set multiline merge rules
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]{3}'
  multiline.negate: true
  multiline.match: after
# configure different document_type values on a single machine
-
  input_type: log
  paths:
    - /home/admin/drds-server/3306/logs/test/sql.log
  document_type: drds-sql
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]{3}'
  multiline.negate: true
  multiline.match: after

#----- Logstash output -----
output.logstash:
  # The Logstash hosts
  hosts: ["10.101.2.23:5044", "10.101.2.24:5044", "10.101.2.25:5044"]

# ===== end of file body =====

# start filebeat

./filebeat -c filebeat.yml > /dev/null &
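
To confirm log lines are flowing into the broker layer, consume the topic matching the document_type on one of the kafka nodes (the same console consumer shown earlier):

cd /usr/local/kafka_2.12-0.10.2.0/

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic drds-slow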

[8] Configure the Logstash indexer cluster

# java environment configuration omitted; see above

# decompress logstash-5.3.1.tar.gz

cd /usr/local

tar -xvf src/logstash-5.3.1.tar.gz

# add the configuration file kafka_to_es.conf

cd logstash-5.3.1

vi config/kafka_to_es.conf # enter the following and save

# the server and topic settings in the input block differ from pre-5.x versions

input {
  kafka {
    bootstrap_servers => "10.101.2.23:9092,10.101.2.24:9092,10.101.2.25:9092"
    group_id => "logstash"
    topics => ["drds-sql", "drds-slow", "sc_user", "sc_channel", "sc_order", "sc_inventory", "sc_message", "sc_file", "sc_marketing", "rms", "scm", "engineering"]
    consumer_threads => 50
    decorate_events => true
  }
}

filter {
}

output {
  elasticsearch {
    hosts => ["10.101.2.28:9200", "10.101.2.29:9200", "10.101.2.30:9200"]
    index => "logstash-%{+YYYY.MM.dd.hh}"
    manage_template => true
    template_overwrite => true
    template_name => "drdsLogstash"
    flush_size => 50000
    idle_flush_time => 10
  }
}

# start logstash

bin/logstash -f config/kafka_to_es.conf > /dev/null &

At this point, the logstash-indexer cluster configuration is complete. Barring surprises, you should now see data being written in elasticsearch-head.
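
To list the indices being created from the command line (the names follow the logstash-%{+YYYY.MM.dd.hh} pattern configured above):

curl 'http://10.101.2.28:9200/_cat/indices?v'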

[9] Configure the Kibana cluster

# decompress kibana-5.3.1-linux-x86_64.tar.gz

cd /usr/local

tar -xvf src/kibana-5.3.1-linux-x86_64.tar.gz

mv kibana-5.3.1-linux-x86_64/ kibana-5.3.1

# configure the kibana.yml file

cd kibana-5.3.1

vi config/kibana.yml # the main parameters are as follows

server.port: 5601

server.host: "0.0.0.0"

elasticsearch.url: "http://10.101.2.28:9200" # points to the es cluster master node

# start kibana

bin/kibana > /dev/null &

# kibana has browser-compatibility issues: older versions of chrome and ie cannot access it and hang at the loading screen

Visit http://ip:5601

# the other kibana node is configured the same way (you can point elasticsearch.url at a different node). Kibana queries support boolean operators, wildcards, and so on; keywords such as AND and OR must be uppercase. Search online for details.
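
Each kibana node also exposes a status endpoint that is handy for health checks (standard in kibana 5.x):

curl http://10.101.2.31:5601/api/status # repeat for 10.101.2.32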

[10] Configure the nginx proxy

# install some dependency packages required by nginx

yum -y install pcre-devel

yum -y install gd-devel

# decompress nginx-1.12.0.tar.gz

cd /usr/local/

tar -xvf src/nginx-1.12.0.tar.gz

# install nginx

cd nginx-1.12.0

./configure --prefix=/usr/local/nginx-1.12.0/ --conf-path=/usr/local/nginx-1.12.0/nginx.conf

make

make install

# configure the nginx.conf file; we only do load balancing here, so adjust the rest as you like

vi /usr/local/nginx-1.12.0/nginx.conf

worker_processes 1;

error_log logs/error.log info;

# pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log logs/access.log main;

    sendfile on;
    keepalive_timeout 65;

    upstream kibana {
        server 10.101.2.31:5601;
        server 10.101.2.32:5601;
    }

    server {
        listen 15601;
        server_name 10.101.2.31;
        # charset koi8-r;
        # access_log logs/host.access.log main;

        location / {
            root html;
            index index.html index.htm;
            proxy_pass http://kibana;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

# start nginx

sbin/nginx
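
Before starting (and after any later edits), nginx's standard syntax check and reload commands are worth knowing:

sbin/nginx -t # test the configuration file

sbin/nginx -s reload # reload after configuration changes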

# then visit http://nginx_ip:15601 in a browser

At this point, all the components of the cluster have been configured.
