2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Operational Elastic Stack: ELK Deployment and Configuration
System environment: CentOS Linux release 7.5.1804 (Core)
Number of servers: 8
Related software and version
jdk-8u181-linux-x64.tar.gz
elasticsearch-6.3.2.tar.gz
zookeeper-3.4.12.tar.gz
kafka_2.10-0.10.0.1.tgz
filebeat-6.3.2-linux-x86_64.tar.gz
kibana-6.3.2-linux-x86_64.tar.gz
logstash-6.3.2.tar.gz
Conventions
All installation packages are placed in /elk/app
All software is installed under /usr/local
Installation of JDK
Version: jdk1.8.0_181
Servers involved (c172 to c178)
Here, take the installation of JDK on c172 as an example
Decompression
tar zxvf jdk-8u181-linux-x64.tar.gz -C /usr/local/
Add a global environment variable
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
Make the variables take effect
source /etc/profile
Verify the Java version
java -version
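If the environment variables are in effect, java -version should report the 1.8.0_181 release, along these lines (the exact build strings may differ):

```text
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
```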
Follow the same steps to install Java on the servers from c173 to c178.
Installation and configuration of ElasticSearch
Version: elasticsearch-6.3.2
Servers c176 to c178 (three)
Here, take c176 as an example
Decompression
tar zxvf elasticsearch-6.3.2.tar.gz -C /usr/local/
Rename
mv elasticsearch-6.3.2 elasticsearch
Create a dedicated es user named elasticsearch and give it ownership of /usr/local/elasticsearch
useradd elasticsearch
chown -R elasticsearch:elasticsearch /usr/local/elasticsearch
System tuning for elasticsearch
vim /etc/sysctl.conf
Add the following
fs.file-max=655360
vm.max_map_count=262144
vim /etc/security/limits.conf
Add the following
* soft nproc 204800
* hard nproc 204800
* soft nofile 655360
* hard nofile 655360
* soft memlock unlimited
* hard memlock unlimited
vim /etc/security/limits.d/20-nproc.conf
Modify the content to
* soft nproc 40960
root soft nproc unlimited
Execute the following command for the changes to take effect
sysctl -p
Log out of the system, log back in, and run ulimit -a to check the tuning results
ulimit -a
JVM tuning
vim /usr/local/elasticsearch/config/jvm.options
It is recommended to set Xms and Xmx to half of the physical memory (here both are set to 1 GB).
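With 1 GB chosen here, the corresponding lines in jvm.options look like this:

```text
-Xms1g
-Xmx1g
```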
Create a data storage directory for elasticsearch
mkdir -p /data1/elasticsearch
mkdir -p /data2/elasticsearch
chown -R elasticsearch:elasticsearch /data1/elasticsearch
chown -R elasticsearch:elasticsearch /data2/elasticsearch
Configure elasticsearch
vim /usr/local/elasticsearch/config/elasticsearch.yml
Add the following
cluster.name: es
node.name: c176
node.master: true
node.data: true
path.data: /data1/elasticsearch,/data2/elasticsearch
path.logs: /usr/local/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 192.168.199.176
http.port: 9200
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping_timeout: 3s
discovery.zen.ping.unicast.hosts: ["192.168.199.176:9300", "192.168.199.177:9300", "192.168.199.178:9300"]
(Note: with three master-eligible nodes, setting discovery.zen.minimum_master_nodes to 2 is normally recommended to avoid split brain.)
Start the elasticsearch cluster. Note: make sure that /usr/local/elasticsearch, /data1/elasticsearch, and /data2/elasticsearch are owned by the elasticsearch user, and always switch to the elasticsearch user before starting the service.
su - elasticsearch
/usr/local/elasticsearch/bin/elasticsearch -d
Verify that the elasticsearch service starts properly
jps
curl http://192.168.199.176:9200
The following output indicates that the service is normal
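For reference, a healthy Elasticsearch 6.3.2 node answers the curl request with JSON of roughly this shape (values vary per node; fields such as cluster_uuid and build hashes are omitted here):

```json
{
  "name" : "c176",
  "cluster_name" : "es",
  "version" : {
    "number" : "6.3.2",
    "lucene_version" : "7.3.1"
  },
  "tagline" : "You Know, for Search"
}
```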
Log files about elasticsearch
/usr/local/elasticsearch/logs/XXX.log (where XXX is the value of cluster.name)
If the service fails to start, analyze and resolve the problem based on the errors reported in the log.
Perform the same elasticsearch installation and configuration steps on the c177 and c178 servers, adjusting node.name and network.host for each.
Installation and configuration of ZooKeeper
Version: zookeeper-3.4.12
Servers involved (c172 to c174)
Here, take c172 installation and configuration of ZooKeeper as an example
Unpack and rename the installation directory
tar zxvf zookeeper-3.4.12.tar.gz -C /usr/local/
mv zookeeper-3.4.12 zookeeper
Copy zoo_sample.cfg and rename it zoo.cfg
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
Create a data directory for ZooKeeper
mkdir -p /data/zookeeper
Create a myid file with content 1
echo "1" > /data/zookeeper/myid (note: the corresponding value is 2 on c173 and 3 on c174)
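The per-host myid assignment can be sketched locally; the loop below only prints the command each host would run (the host-to-id mapping comes from the note above):

```shell
# myid must be unique per ZooKeeper server and match the server.N entries in zoo.cfg.
for pair in "c172 1" "c173 2" "c174 3"; do
  set -- $pair                       # split "host id" into $1 and $2
  echo "$1: echo $2 > /data/zookeeper/myid"
done
```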
Configure zookeeper
vim /usr/local/zookeeper/conf/zoo.cfg
Add the following
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.199.172:2888:3888
server.2=192.168.199.173:2888:3888
server.3=192.168.199.174:2888:3888
Start the ZooKeeper service and verify that it started properly with jps or ps -ef
zkServer.sh start
jps
ps -ef | grep zookeeper
Note:
ZooKeeper writes its log, named zookeeper.out, to the directory from which it was started.
Complete the same ZooKeeper installation and configuration on the c173 and c174 servers.
Installation and configuration of Kafka
Version: kafka_2.10-0.10.0.1
Servers involved (c172 to c174)
Here, take the installation and configuration of Kafka on c172 as an example
Note: check which Kafka versions are supported by the Filebeat release in use (filebeat-6.3.2):
https://www.elastic.co/guide/en/beats/filebeat/6.3/kafka-output.html
Get the kafka installation package
wget https://archive.apache.org/dist/kafka/0.10.0.1/kafka_2.10-0.10.0.1.tgz
Unpack and rename
tar zxvf kafka_2.10-0.10.0.1.tgz -C /usr/local/
mv kafka_2.10-0.10.0.1 kafka
Create a kafka log directory
mkdir -p /kafka-logs
Edit the kafka configuration file
vim /usr/local/kafka/config/server.properties
Key configuration items (c172 as an example)
broker.id=1
listeners=PLAINTEXT://192.168.199.172:9092
log.dirs=/kafka-logs
num.partitions=6
log.retention.hours=60
log.segment.bytes=1073741824
zookeeper.connect=192.168.199.172:2181,192.168.199.173:2181,192.168.199.174:2181
auto.create.topics.enable=true
delete.topic.enable=true
Start kafka (nohup … &: run in the background)
nohup kafka-server-start.sh /usr/local/kafka/config/server.properties &
Detect kafka service
jps
Kafka's log file nohup.out is generated in the directory from which kafka was started.
The same steps are used to complete the installation and configuration of kafka for c173 and c174 servers
Kafka basic command operation
Show topic list
kafka-topics.sh --list --zookeeper c172:2181
Create a topic and specify its attributes (replication factor, number of partitions, etc.)
kafka-topics.sh --create --zookeeper c172:2181 --replication-factor 2 --partitions 3 --topic xyz (the replication-factor value was garbled in the source; 2 is shown as an example and must not exceed the number of brokers)
Check the status of a topic
kafka-topics.sh --describe --zookeeper c172:2181,c173:2181 --topic xyz
Production message
kafka-console-producer.sh --broker-list c172:9092 --topic xyz
Consumption message
kafka-console-consumer.sh --zookeeper c172:2181 --topic xyz
kafka-console-consumer.sh --zookeeper c172:2181 --topic xyz --from-beginning
Delete topic
kafka-topics.sh --delete --zookeeper c172:2181 --topic xyz
Installation and configuration of Filebeat
Version: filebeat-6.3.2
Server c171 involved
Unpack the installation package and rename it
tar zxvf filebeat-6.3.2-linux-x86_64.tar.gz -C /usr/local/
mv filebeat-6.3.2-linux-x86_64 filebeat
There are two ways to configure filebeat:
Traditional method: modify the filebeat.yml configuration file
Module method: the modules.d directory contains ready-made modules for common services
List of modules supported under modules.d
Here we take modifying the filebeat.yml configuration as an example
#----------------------------- INPUT -----------------------------#
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/secure
  fields:
    log_topic: osmessages
name: "192.168.199.171"
#----------------------------- OUTPUT -----------------------------#
output.kafka:
  enabled: true
  hosts: ["192.168.199.172:9092", "192.168.199.173:9092", "192.168.199.174:9092"]
  version: "0.10"
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: true
  worker: 2
  required_acks: 1
  compression: gzip
  max_message_bytes: 10000000
logging.level: debug
Start filebeat
nohup ./filebeat -e -c filebeat.yml &
View output
tail -f nohup.out
An example of data filtering follows; the results can then be viewed through nohup.out
processors:
  - drop_fields:
      fields: ["beat", "host", "input", "source", "offset", "prospector"]
Verify that logs from filebeat reach kafka
On one of the kafka servers, confirm that a topic named osmessages exists and consume it:
kafka-topics.sh --list --zookeeper c172:2181
kafka-console-consumer.sh --zookeeper c172:2181 --topic osmessages
Test: ssh to c171 (the filebeat server) from any server; the resulting log lines then appear in the consumer output, showing that the kafka servers received and consumed the data from the filebeat server.
Installation and configuration of Logstash
Version: logstash-6.3.2
Server c175 involved
Unpack the installation package and rename it
tar zxvf logstash-6.3.2.tar.gz -C /usr/local/
mv logstash-6.3.2 logstash
Logstash does only three things: receive data (input), analyze and filter it (filter), and output data (output).
All three are implemented through a plug-in mechanism; filter is optional, while input and output are required.
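A logstash event pipeline therefore always has this overall shape (the plug-in names in the comments are common examples, not requirements; filter may be omitted entirely):

```conf
input {
  # one or more input plug-ins, e.g. stdin, kafka, beats
}
filter {
  # optional: grok, mutate, date, and similar transformations
}
output {
  # one or more output plug-ins, e.g. stdout, elasticsearch
}
```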
Plug-in: https://github.com/logstash-plugins
Input plug-in: receiving data
Filter plug-in: filtering data
Output plug-ins: exporting data
Logstash configuration files
jvm.options: JVM memory settings
logstash.yml: global logstash configuration file
Event pipeline configuration files: created manually
./logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
Alternatively, have logstash read the pipeline from a configuration file.
vim testfile.conf
bin/logstash -f testfile.conf
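The testfile.conf created above could simply contain the file-based equivalent of the inline pipeline shown earlier:

```conf
input  { stdin { } }
output { stdout { codec => rubydebug } }
```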
Or run it in the background and view the log in nohup.out.
nohup bin/logstash -f testfile.conf &
The logstash instance in this example receives data from kafka and outputs it to elasticsearch. Create a configuration file for logstash:
vim kafka_io_es.conf
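The exact contents of kafka_io_es.conf are not reproduced in the source; a minimal sketch, assuming the osmessages topic and the cluster addresses used throughout this guide, might look like this:

```conf
input {
  kafka {
    bootstrap_servers => "192.168.199.172:9092,192.168.199.173:9092,192.168.199.174:9092"
    topics            => ["osmessages"]
    codec             => "json"   # filebeat publishes JSON events to kafka
  }
}
output {
  elasticsearch {
    hosts => ["192.168.199.176:9200", "192.168.199.177:9200", "192.168.199.178:9200"]
    index => "osmessages-%{+YYYY.MM.dd}"
  }
}
```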
Call and execute
nohup ./bin/logstash -f ./config/kafka_io_es.conf &
tail -f nohup.out
Installation and configuration of Kibana
Version: kibana-6.3.2
Server c178 involved
Unpack the installation package and rename it
tar zxvf kibana-6.3.2-linux-x86_64.tar.gz -C /usr/local/
mv kibana-6.3.2-linux-x86_64 kibana
Configuration file
vim /usr/local/kibana/config/kibana.yml
Enter the following
server.port: 5601
server.host: "192.168.199.178"
elasticsearch.url: "http://192.168.199.176:9200"
kibana.index: ".kibana"
Start the kibana service
nohup /usr/local/kibana/bin/kibana &
View the log
tail -f nohup.out
View service processes
ps -ef | grep node
Open a browser on the client and go to
http://192.168.199.178:5601
To summarize, here is how each service is started:
nohup ./filebeat -e -c filebeat.yml &
zkServer.sh start
nohup kafka-server-start.sh /usr/local/kafka/config/server.properties &
nohup ./bin/logstash -f ./config/kafka_io_es.conf &
elasticsearch -d
nohup /usr/local/kibana/bin/kibana &
It is essential to understand in depth how the data flows and what each service contributes.
Key point and difficulty: the plug-in configuration of logstash (input, filter, output)