How to understand container deployment of ELK 7.10


This article walks through how to deploy ELK 7.10 (Elasticsearch, Logstash and Kibana, together with Filebeat, Kafka and ZooKeeper) in containers, step by step, with simple, ready-to-use commands and configuration.

I. A brief introduction to the ELK architecture

First, Logstash can collect, filter and parse logs; its feature set is complete, but it is correspondingly heavy and consumes a lot of system resources. Filebeat is a lightweight log collector; it has no filtering capability, but as the agent deployed on the application servers it is arguably the best choice. When Logstash's filtering is still needed, Filebeat collects the logs and then hands them to Logstash to filter.

Second, Logstash's throughput is limited. If Filebeat ships too many logs in a short period, they pile up and block, and log collection suffers, so a Kafka message queue is added between Filebeat and Logstash as a buffer and to decouple the two (Redis would also work). With many Filebeat nodes writing a large volume of logs straight into Kafka, Logstash can consume at its own pace and neither side interferes with the other.

As for ZooKeeper, it is the standard tool for distributed coordination: it registers and monitors the Kafka broker nodes, manages topics, and so on, and it compensates for the fact that Kafka cluster nodes are otherwise not visible to the outside world. Kafka does ship with an embedded ZooKeeper, but a standalone ZooKeeper is used here so that the ZooKeeper cluster can be scaled out more easily later.
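To make the buffering concrete: once the stack described in the following sections is running, you can stop Logstash briefly and read the backlog straight out of Kafka, which shows the queue absorbing the load. This is only a rough sketch; the topic name elk-nginx-access-prod is derived from the Filebeat configuration shown later (topic: "elk-%{[fields.source]}" with source nginx-access-prod), and the tool path assumes the wurstmeister/kafka image layout.

# Pause the consumer side; Filebeat keeps writing into Kafka and nothing is lost
$ docker stop logstash2
# Peek at the buffered events directly on a broker
$ docker exec -it kafka1 /opt/kafka/bin/kafka-console-consumer.sh \
    --bootstrap-server 172.20.166.27:9092 \
    --topic elk-nginx-access-prod \
    --from-beginning --max-messages 5
# Resume consumption; the logstash-node consumer group picks up where it left off
$ docker start logstash2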

II. Environment

Alibaba Cloud ECS: 5 instances run the ES nodes, and 3 instances run the Logstash, Kafka, ZooKeeper and Kibana services.

Instance specification: the 5 ES instances and the 3 service instances are all 4-core / 16 GB with SSD disks, running CentOS 7.8.

Install docker and docker-compose

ELK version 7.10.1, ZooKeeper version 3.6.2, Kafka version 2.13-2.6.0.

III. Optimization of system parameters

# Maximum number of processes a user may open
$ vim /etc/security/limits.d/20-nproc.conf
* soft nproc 65535
* hard nproc 65535

# Kernel tuning for Docker support
$ modprobe br_netfilter
$ cat > /etc/sysctl.conf
# Apply the configuration
$ sysctl -p
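A quick, optional sanity check (after logging in again so the new limits apply) confirms the settings took effect:

# Should report the new per-user process limit of 65535
$ ulimit -u
# br_netfilter should appear in the list of loaded kernel modules
$ lsmod | grep br_netfilter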

IV. Deploy docker and docker-compose

Deploy docker

# Install some necessary system tools
$ yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the repository
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Update the cache and install Docker CE
$ yum makecache fast
$ yum -y install docker-ce
# Configure docker
$ systemctl enable docker
$ systemctl start docker
$ vim /etc/docker/daemon.json
{
    "data-root": "/var/lib/docker",
    "bip": "10.50.0.1/16",
    "default-address-pools": [
        {"base": "10.51.0.1/16", "size": 24}
    ],
    "registry-mirrors": ["https://4xr1qpsp.mirror.aliyuncs.com"],
    "log-opts": {
        "max-size": "500m",
        "max-file": "3"
    }
}
$ sed -i '/ExecStart=/i ExecStartPost=\/sbin\/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
$ systemctl enable docker.service
$ systemctl daemon-reload
$ systemctl restart docker

Deploy docker-compose

# Install docker-compose
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
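Before moving on, it is worth confirming both installations; for example:

$ docker --version
$ docker-compose --version
# The data-root and registry mirror from daemon.json should show up here
$ docker info | grep -iE 'root dir|mirrors'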

V. Deploy ES

Es-master1 operation

# Create the ES directories
$ mkdir /data/ELKStack
$ cd /data/ELKStack
$ mkdir elasticsearch elasticsearch-data elasticsearch-plugins
# The es user inside the container has uid and gid 1000
$ chown 1000.1000 elasticsearch-data elasticsearch-plugins

# Temporarily start an ES container to generate the certificates
$ docker run --name es-test -it --rm docker.elastic.co/elasticsearch/elasticsearch:7.10.1 bash
# Generate a CA and a certificate valid for 10 years; leave the passwords empty when prompted
$ bin/elasticsearch-certutil ca --days 3660
$ bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --days 3660
# In a new window, copy out the generated certificates
$ cd /data/ELKStack/elasticsearch
$ mkdir es-p12
$ docker cp es-test:/usr/share/elasticsearch/elastic-certificates.p12 ./es-p12
$ docker cp es-test:/usr/share/elasticsearch/elastic-stack-ca.p12 ./es-p12
$ chown -R 1000.1000 ./es-p12

# Create docker-compose.yml
$ vim docker-compose.yml
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    container_name: es01
    environment:
      - cluster.name=es-docker-cluster
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms10000m -Xmx10000m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 13000m
    cap_add:
      - IPC_LOCK
    restart: always
    # Use the docker host network mode
    network_mode: "host"
    volumes:
      - /data/ELKStack/elasticsearch-data:/usr/share/elasticsearch/data
      - /data/ELKStack/elasticsearch-plugins:/usr/share/elasticsearch/plugins
      - /data/ELKStack/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/ELKStack/elasticsearch/es-p12:/usr/share/elasticsearch/config/es-p12

# Create the elasticsearch.yml configuration file
$ vim elasticsearch.yml
cluster.name: "es-docker-cluster"
node.name: "es01"
network.host: 0.0.0.0
node.master: true
node.data: true
discovery.zen.minimum_master_nodes: 2
http.port: 9200
transport.tcp.port: 9300
# For a multi-node cluster, the nodes discover and health-check each other via ping
discovery.zen.ping.unicast.hosts: ["172.20.166.25:9300", "172.20.166.24:9300", "172.20.166.22:9300", "172.20.166.23:9300", "172.20.166.26:9300"]
discovery.zen.fd.ping_timeout: 120s
discovery.zen.fd.ping_retries: 6
discovery.zen.fd.ping_interval: 10s
cluster.info.update.interval: 1m
indices.fielddata.cache.size: 20%
indices.breaker.fielddata.limit: 40%
indices.breaker.request.limit: 40%
indices.breaker.total.limit: 70%
indices.memory.index_buffer_size: 20%
script.painless.regex.enabled: true
# Disk-based shard allocation watermarks
cluster.routing.allocation.disk.watermark.low: 100gb
cluster.routing.allocation.disk.watermark.high: 50gb
cluster.routing.allocation.disk.watermark.flood_stage: 30gb
# Local shard recovery settings
gateway.recover_after_nodes: 3
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
cluster.routing.allocation.node_initial_primaries_recoveries: 8
cluster.routing.allocation.node_concurrent_recoveries: 2
# Allow cross-origin requests
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Enable xpack
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
# Enable TLS for transport traffic inside the cluster
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: es-p12/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: es-p12/elastic-certificates.p12

# Sync the ES configuration to the other ES nodes with rsync
$ rsync -avp -e ssh /data/ELKStack 172.20.166.24:/data/
$ rsync -avp -e ssh /data/ELKStack 172.20.166.22:/data/
$ rsync -avp -e ssh /data/ELKStack 172.20.166.23:/data/
$ rsync -avp -e ssh /data/ELKStack 172.20.166.26:/data/
# Start ES
$ docker-compose up -d
# Check ES
$ docker-compose ps

Es-master2 operation

$ cd /data/ELKStack/elasticsearch
# Modify the node name in the two configuration files docker-compose.yml and elasticsearch.yml
$ sed -i 's/es01/es02/g' docker-compose.yml elasticsearch.yml
# Start ES
$ docker-compose up -d

Es-master3 operation

$ cd /data/ELKStack/elasticsearch
# Modify the node name in the two configuration files docker-compose.yml and elasticsearch.yml
$ sed -i 's/es01/es03/g' docker-compose.yml elasticsearch.yml
# Start ES
$ docker-compose up -d

Es-data1 operation

$ cd /data/ELKStack/elasticsearch
# Modify the node name in the two configuration files docker-compose.yml and elasticsearch.yml
$ sed -i 's/es01/es04/g' docker-compose.yml elasticsearch.yml
# This node is a data node, not a master-eligible node
$ sed -i 's/node.master: true/node.master: false/g' elasticsearch.yml
# Start ES
$ docker-compose up -d

Es-data2 operation

$ cd /data/ELKStack/elasticsearch
# Modify the node name in the two configuration files docker-compose.yml and elasticsearch.yml
$ sed -i 's/es01/es05/g' docker-compose.yml elasticsearch.yml
# This node is a data node, not a master-eligible node
$ sed -i 's/node.master: true/node.master: false/g' elasticsearch.yml
# Start ES
$ docker-compose up -d

Set up es access account

# Run on es-master1
$ docker exec -it es01 bash
# Set passwords for elastic, apm_system, kibana, kibana_system, logstash_system, beats_system, remote_monitoring_user, etc.
# They are all set to elastic123 here purely as an example; choose your own passwords
$ ./bin/elasticsearch-setup-passwords interactive
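With the passwords in place, a quick authenticated request against any ES node should show all five nodes and a green (or at least yellow) cluster. The address and password below simply follow the examples above; substitute your own values.

$ curl -u elastic:elastic123 http://172.20.166.25:9200/_cluster/health?pretty
# Expect "number_of_nodes" : 5 and a "green" or "yellow" status
$ curl -u elastic:elastic123 http://172.20.166.25:9200/_cat/nodes?v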

VI. Deploy Kibana

Logstash4 operation

$ mkdir -p /data/ELKStack/kibana
$ cd /data/ELKStack/kibana
# Create the kibana directories that will be mounted into the container
$ mkdir config data plugins
$ chown 1000.1000 config data plugins

# Create docker-compose.yml
$ vim docker-compose.yml
version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.1
    container_name: kibana
    restart: always
    network_mode: "bridge"
    mem_limit: 2000m
    environment:
      SERVER_NAME: kibana.example.com
    ports:
      - "5601:5601"
    volumes:
      - /data/ELKStack/kibana/config:/usr/share/kibana/config
      - /data/ELKStack/kibana/data:/usr/share/kibana/data
      - /data/ELKStack/kibana/plugins:/usr/share/kibana/plugins

# Create kibana.yml
$ vim config/kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://172.20.166.25:9200", "http://172.20.166.24:9200", "http://172.20.166.22:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "elastic123"
monitoring.ui.container.elasticsearch.enabled: true
xpack.security.enabled: true
xpack.encryptedSavedObjects.encryptionKey: encryptedSavedObjects12345678909876543210
xpack.security.encryptionKey: encryptionKeysecurity12345678909876543210
xpack.reporting.encryptionKey: encryptionKeyreporting12345678909876543210
i18n.locale: "zh-CN"

# Start kibana
$ docker-compose up -d
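A minimal check that Kibana is up and can talk to ES, assuming it runs on the logstash4 host (172.20.166.29) as in this setup:

# An HTTP 200 means the Kibana server is up and the credentials are accepted
$ curl -s -o /dev/null -w '%{http_code}\n' -u elastic:elastic123 http://172.20.166.29:5601/api/status
# Or simply open http://172.20.166.29:5601 in a browser and log in as elastic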

VII. Deploy ZooKeeper

Logstash2 operation

# Create the zookeeper directories
$ mkdir /data/ELKStack/zookeeper
$ cd /data/ELKStack/zookeeper
$ mkdir data datalog
$ chown 1000.1000 data datalog

# Create docker-compose.yml
$ vim docker-compose.yml
version: '2'
services:
  zoo1:
    image: zookeeper:3.6.2
    restart: always
    hostname: zoo1
    container_name: zoo1
    network_mode: "bridge"
    mem_limit: 2000m
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/ELKStack/zookeeper/data:/data
      - /data/ELKStack/zookeeper/datalog:/datalog
      - /data/ELKStack/zookeeper/zoo.cfg:/conf/zoo.cfg
    environment:
      ZOO_MY_ID: 1      # The id of this ZK server, an integer from 1 to 255; it must be unique within the cluster
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=172.20.166.28:2888:3888;2181 server.3=172.20.166.29:2888:3888;2181
      # ZOOKEEPER_CLIENT_PORT: 2181

# Create the zoo.cfg configuration
$ vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/datalog
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
maxClientCnxns=60
server.1=0.0.0.0:2888:3888;2181
server.2=172.20.166.28:2888:3888;2181
server.3=172.20.166.29:2888:3888;2181

# Copy the configuration to the logstash3 and logstash4 machines
$ rsync -avp -e ssh /data/ELKStack/zookeeper 172.20.166.28:/data/ELKStack/
$ rsync -avp -e ssh /data/ELKStack/zookeeper 172.20.166.29:/data/ELKStack/
# Start zookeeper
$ docker-compose up -d

Logstash3 operation

$ cd /data/ELKStack/zookeeper
# Modify the docker-compose.yml file
$ vim docker-compose.yml
version: '2'
services:
  zoo2:
    image: zookeeper:3.6.2
    restart: always
    hostname: zoo2
    container_name: zoo2
    network_mode: "bridge"
    mem_limit: 2000m
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/ELKStack/zookeeper/data:/data
      - /data/ELKStack/zookeeper/datalog:/datalog
      - /data/ELKStack/zookeeper/zoo.cfg:/conf/zoo.cfg
    environment:
      ZOO_MY_ID: 2      # The id of this ZK server, an integer from 1 to 255; it must be unique within the cluster
      ZOO_SERVERS: server.1=172.20.166.27:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=172.20.166.29:2888:3888;2181
      # ZOOKEEPER_CLIENT_PORT: 2181

# Modify zoo.cfg
$ vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/datalog
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
maxClientCnxns=60
server.1=172.20.166.27:2888:3888;2181
server.2=0.0.0.0:2888:3888;2181
server.3=172.20.166.29:2888:3888;2181

# Start zookeeper
$ docker-compose up -d

Logstash4 operation

$ cd /data/ELKStack/zookeeper
# Modify the docker-compose.yml file
$ vim docker-compose.yml
version: '2'
services:
  zoo3:
    image: zookeeper:3.6.2
    restart: always
    hostname: zoo3
    container_name: zoo3
    network_mode: "bridge"
    mem_limit: 2000m
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/ELKStack/zookeeper/data:/data
      - /data/ELKStack/zookeeper/datalog:/datalog
      - /data/ELKStack/zookeeper/zoo.cfg:/conf/zoo.cfg
    environment:
      ZOO_MY_ID: 3      # The id of this ZK server, an integer from 1 to 255; it must be unique within the cluster
      ZOO_SERVERS: server.1=172.20.166.27:2888:3888;2181 server.2=172.20.166.28:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
      # ZOOKEEPER_CLIENT_PORT: 2181

# Modify zoo.cfg
$ vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/datalog
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
maxClientCnxns=60
server.1=172.20.166.27:2888:3888;2181
server.2=172.20.166.28:2888:3888;2181
server.3=0.0.0.0:2888:3888;2181

# Start zookeeper
$ docker-compose up -d
# Connect to zookeeper to verify
$ docker exec -it zoo3 bash
$ zkCli.sh -server 172.20.166.27:2181,172.20.166.28:2181,172.20.166.29:2181
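With all three nodes up, each container should report its role: exactly one leader and two followers. A rough check, assuming zkServer.sh is on the PATH inside the official zookeeper image:

# Run the matching command on each host (zoo1 on logstash2, zoo2 on logstash3, zoo3 on logstash4)
$ docker exec -it zoo1 zkServer.sh status
# Expect "Mode: leader" on one node and "Mode: follower" on the other two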

VIII. Deploy Kafka

Logstash2 operation

# Create the kafka directory
$ mkdir -p /data/ELKStack/kafka
$ cd /data/ELKStack/kafka
# Create the data directory used to store the kafka container's data
$ mkdir data
# Copy the kafka configuration out to the host
$ docker run --name kafka-test -it --rm wurstmeister/kafka:2.13-2.6.0 bash
$ cd /opt/kafka
$ tar zcvf /tmp/config.tar.gz config
# In a new window
$ docker cp kafka-test:/tmp/config.tar.gz ./
# Extract the configuration files
$ tar xf config.tar.gz

# Create docker-compose.yml
$ vim docker-compose.yml
version: '2'
services:
  kafka1:
    image: wurstmeister/kafka:2.13-2.6.0
    restart: always
    hostname: kafka1
    container_name: kafka1
    network_mode: "bridge"
    mem_limit: 5120m
    ports:
      - 9092:9092
      - 9966:9966
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.20.166.27:9092   # The host's IP address (not the container IP) and the exposed port
      KAFKA_ADVERTISED_HOST_NAME: 172.20.166.27                    # Externally reachable address
      KAFKA_ADVERTISED_PORT: 9092                                  # Port
      KAFKA_ZOOKEEPER_CONNECT: 172.20.166.27:2181,172.20.166.28:2181,172.20.166.29:2181   # The zookeeper servers and port to connect to
      KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=172.20.166.27 -Dcom.sun.management.jmxremote.rmi.port=9966"
      JMX_PORT: 9966                                               # JMX must be enabled when monitoring broker and topic data
      KAFKA_HEAP_OPTS: "-Xmx4096M -Xms4096M"
    volumes:
      - /data/ELKStack/kafka/data:/kafka                           # Kafka data directory
      - /data/ELKStack/kafka/config:/opt/kafka/config

# Tune the kafka server.properties configuration
$ vim config/server.properties
# Increase the socket buffers and the maximum request size
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=1048576000
# How long topic data is retained; the default is 168h (7 days)
log.retention.hours=72
log.cleanup.policy=delete

# Copy the configuration to the logstash3 and logstash4 machines
$ rsync -avp -e ssh /data/ELKStack/kafka 172.20.166.28:/data/ELKStack/
$ rsync -avp -e ssh /data/ELKStack/kafka 172.20.166.29:/data/ELKStack/
# Start kafka
$ docker-compose up -d

Logstash3 operation

$ cd /data/ELKStack/kafka
# Modify the docker-compose.yml file
$ vim docker-compose.yml
version: '2'
services:
  kafka2:
    image: wurstmeister/kafka:2.13-2.6.0
    restart: always
    hostname: kafka2
    container_name: kafka2
    network_mode: "bridge"
    mem_limit: 5120m
    ports:
      - 9092:9092
      - 9966:9966
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.20.166.28:9092   # The host's IP address (not the container IP) and the exposed port
      KAFKA_ADVERTISED_HOST_NAME: 172.20.166.28                    # Externally reachable address
      KAFKA_ADVERTISED_PORT: 9092                                  # Port
      KAFKA_ZOOKEEPER_CONNECT: 172.20.166.27:2181,172.20.166.28:2181,172.20.166.29:2181   # The zookeeper servers and port to connect to
      KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=172.20.166.28 -Dcom.sun.management.jmxremote.rmi.port=9966"
      JMX_PORT: 9966                                               # JMX must be enabled when monitoring broker and topic data
      KAFKA_HEAP_OPTS: "-Xmx4096M -Xms4096M"
    volumes:
      - /data/ELKStack/kafka/data:/kafka                           # Kafka data directory
      - /data/ELKStack/kafka/config:/opt/kafka/config

# Start kafka
$ docker-compose up -d

Logstash4 operation

$ cd /data/ELKStack/kafka
# Modify the docker-compose.yml file
$ vim docker-compose.yml
version: '2'
services:
  kafka3:
    image: wurstmeister/kafka:2.13-2.6.0
    restart: always
    hostname: kafka3
    container_name: kafka3
    network_mode: "bridge"
    mem_limit: 5120m
    ports:
      - 9092:9092
      - 9966:9966
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.20.166.29:9092   # The host's IP address (not the container IP) and the exposed port
      KAFKA_ADVERTISED_HOST_NAME: 172.20.166.29                    # Externally reachable address
      KAFKA_ADVERTISED_PORT: 9092                                  # Port
      KAFKA_ZOOKEEPER_CONNECT: 172.20.166.27:2181,172.20.166.28:2181,172.20.166.29:2181   # The zookeeper servers and port to connect to
      KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=172.20.166.29 -Dcom.sun.management.jmxremote.rmi.port=9966"
      JMX_PORT: 9966                                               # JMX must be enabled when monitoring broker and topic data
      KAFKA_HEAP_OPTS: "-Xmx4096M -Xms4096M"
    volumes:
      - /data/ELKStack/kafka/data:/kafka                           # Kafka data directory
      - /data/ELKStack/kafka/config:/opt/kafka/config

# Start kafka
$ docker-compose up -d

# Deploy kafka-manager to manage the kafka cluster
$ mkdir /data/ELKStack/kafka-manager
$ cd /data/ELKStack/kafka-manager
$ vim docker-compose.yml
version: '3.6'
services:
  kafka_manager:
    restart: always
    container_name: kafa-manager
    hostname: kafka-manager
    network_mode: "bridge"
    mem_limit: 1024m
    image: hlebalbau/kafka-manager:3.0.0.5-7e7a22e
    ports:
      - "9000:9000"
    environment:
      ZK_HOSTS: "172.20.166.27:2181,172.20.166.28:2181,172.20.166.29:2181"
      APPLICATION_SECRET: "random-secret"
      KAFKA_MANAGER_AUTH_ENABLED: "true"
      KAFKA_MANAGER_USERNAME: admin
      KAFKA_MANAGER_PASSWORD: elastic123
      JMX_PORT: 9966
      TZ: "Asia/Shanghai"

# Start kafka-manager
$ docker-compose up -d
# Visit http://172.20.166.29:9000 and add the three kafka brokers created above to the management console;
# that is not covered in detail here, as there are plenty of configuration tutorials online.
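To confirm the three brokers have formed a cluster through ZooKeeper, a throwaway topic with three replicas can be created and inspected. This is only a sketch; the kafka-topics.sh path and the topic name test-replication are assumptions based on the wurstmeister image layout.

$ docker exec -it kafka1 /opt/kafka/bin/kafka-topics.sh --create \
    --bootstrap-server 172.20.166.27:9092 \
    --replication-factor 3 --partitions 3 --topic test-replication
$ docker exec -it kafka1 /opt/kafka/bin/kafka-topics.sh --describe \
    --bootstrap-server 172.20.166.27:9092 --topic test-replication
# Each partition should list a leader plus replicas spread over broker ids 1, 2 and 3
$ docker exec -it kafka1 /opt/kafka/bin/kafka-topics.sh --delete \
    --bootstrap-server 172.20.166.27:9092 --topic test-replication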

IX. Deploy Logstash

Logstash2 operation

$ mkdir /data/ELKStack/logstash
$ cd /data/ELKStack/logstash
$ mkdir config data
$ chown 1000.1000 config data

# Create docker-compose.yml
$ vim docker-compose.yml
version: '2'
services:
  logstash2:
    image: docker.elastic.co/logstash/logstash:7.10.1
    container_name: logstash2
    hostname: logstash2
    restart: always
    network_mode: "bridge"
    mem_limit: 4096m
    environment:
      TZ: "Asia/Shanghai"
    ports:
      - 5044:5044
    volumes:
      - /data/ELKStack/logstash/config:/config-dir
      - /data/ELKStack/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /data/ELKStack/logstash/data:/usr/share/logstash/data
      - /etc/localtime:/etc/localtime
    user: logstash
    command: bash -c "logstash -f /config-dir --config.reload.automatic"

# Create logstash.yml
$ vim logstash.yml
http.host: "0.0.0.0"
# The size of the batches sent to Elasticsearch; a higher value is more efficient but increases memory overhead
pipeline.batch.size: 3000
# The pipeline batch delay; once it elapses, logstash starts to run the filter and output stages
pipeline.batch.delay: 200

# Create the logstash pipeline configuration
$ vim config/01-input.conf
input {                                                 # input plugin
  kafka {                                               # consume data from kafka
    bootstrap_servers => ["172.20.166.27:9092,172.20.166.28:9092,172.20.166.29:9092"]
    #topics => "%{[@metadata][topic]}"                  # use the topic passed by kafka
    topics_pattern => "elk-.*"                          # match topics with a regular expression
    codec => "json"                                     # data format
    consumer_threads => 3                               # number of consumer threads
    decorate_events => true                             # add kafka metadata (such as topic and message size) to the event as a field named kafka
    auto_offset_reset => "latest"                       # automatically reset the offset to the latest offset
    group_id => "logstash-node"                         # consumer group ID; logstash instances with the same group_id form one consumer group
    client_id => "logstash2"                            # client ID
    fetch_max_wait_ms => "1000"                         # the maximum time the server blocks before answering a fetch request when there is not enough data to satisfy fetch_min_bytes
  }
}

$ vim config/02-output.conf
output {                                                # logstash outputs to es
  elasticsearch {
    hosts => ["172.20.166.25:9200", "172.20.166.24:9200", "172.20.166.22:9200", "172.20.166.23:9200", "172.20.166.26:9200"]
    index => "%{[fields][source]}-%{+YYYY-MM-dd}"       # build the index from the field carried in the log; the index drops the elk- prefix of the topic
    #index => "%{[@metadata][topic]}-%{+YYYY-MM-dd}"    # index by topic and date
    user => "elastic"
    password => "elastic123"
  }
  #stdout {
  #  codec => rubydebug
  #}
}

$ vim config/03-filter.conf
filter {
  # Drop non-business events that carry no traceId; this is only a demonstration, adjust the filter to your own business needs
  if [message] =~ "traceId=null" {
    drop {}
  }
}

# Copy the configuration to the logstash3 and logstash4 machines
$ rsync -avp -e ssh /data/ELKStack/logstash 172.20.166.28:/data/ELKStack/
$ rsync -avp -e ssh /data/ELKStack/logstash 172.20.166.29:/data/ELKStack/
# Start logstash
$ docker-compose up -d
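Before relying on automatic reload, the pipeline files can be syntax-checked inside the container. A minimal sketch; the --path.data scratch directory is only there so the test run does not collide with the data-directory lock held by the running instance.

$ docker exec -it logstash2 bin/logstash -f /config-dir --config.test_and_exit --path.data /tmp/logstash-test
# "Configuration OK" means the input/filter/output files parse cleanly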

Logstash3 operation

$ cd /data/ELKStack/logstash
# Change the instance name in docker-compose.yml and the client_id in config/01-input.conf
$ sed -i 's/logstash2/logstash3/g' docker-compose.yml config/01-input.conf
# Start logstash
$ docker-compose up -d

Logstash4 operation

$ cd /data/ELKStack/logstash
# Change the instance name in docker-compose.yml and the client_id in config/01-input.conf
$ sed -i 's/logstash2/logstash4/g' docker-compose.yml config/01-input.conf
# Start logstash
$ docker-compose up -d
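Once all three Logstash instances are consuming, the shared consumer group logstash-node can be inspected from any broker to see partition assignment and lag; the tool path again assumes the wurstmeister image layout.

$ docker exec -it kafka1 /opt/kafka/bin/kafka-consumer-groups.sh \
    --bootstrap-server 172.20.166.27:9092 \
    --describe --group logstash-node
# Each partition of the elk-* topics should have an assigned consumer and a small, shrinking LAG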

X. Deploy Filebeat

# Configure the filebeat yum repository, taking CentOS 7 as an example
$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
$ vim /etc/yum.repos.d/elastic.repo
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

$ yum install -y filebeat-7.10.1
$ systemctl enable filebeat

# Configure filebeat; collecting the nginx access log is used as the example
$ cd /etc/filebeat/
$ cp -a filebeat.yml filebeat.yml.old
$ echo > filebeat.yml
$ vim filebeat.yml
filebeat.inputs:                      # inputs is plural; several input types can be listed
- type: log                           # input type: the nginx access log
  enabled: true                       # enable this input
  json.keys_under_root: true          # defaults to false, i.e. the JSON log is parsed under a "json" key; set to true to place all keys at the root of the event
  json.overwrite_keys: true           # whether to overwrite existing keys; with keys_under_root and overwrite_keys both true, the default filebeat keys can be overwritten
  max_bytes: 20480                    # size limit for a single log line (the default is 10 MB; queue.mem.events * max_bytes contributes to memory use)
  paths:
    - /var/log/nginx/access.log       # the nginx access log to monitor
  fields:                             # extra fields
    source: nginx-access-prod         # custom source field used for the es index (keep field values lowercase)

setup.ilm.enabled: false              # must be set to false when using a custom es index name

output.kafka:                         # output to kafka
  enabled: true                       # whether this output is enabled
  hosts: ["172.20.166.27:9092", "172.20.166.28:9092", "172.20.166.29:9092"]   # kafka broker list
  topic: "elk-%{[fields.source]}"     # kafka creates this topic; logstash then uses it (possibly filtered or modified) as the es index name
  partition.hash:
    reachable_only: true              # send only to reachable partitions
  compression: gzip                   # compression
  max_message_bytes: 1000000          # maximum event size in bytes, default 1000000; must be less than or equal to the kafka broker's message.max.bytes
  required_acks: 1                    # kafka ack level
  worker: 1                           # maximum concurrency of the kafka output
  bulk_max_size: 2048                 # maximum number of events sent to kafka in one request

logging.to_files: true                # write all logs to files (default true); files are rotated when the size limit is reached
                                      # detailed options: https://www.cnblogs.com/qinwengang/p/10982424.html
close_older: 30m                      # close the handle of a monitored file if it has not been updated within this period; default 1h
force_close_files: false              # close a file when its name changes; recommended true only on Windows
close_inactive: 1m                    # close the file handle this long after no new logs arrive; default 5 minutes, lowered to 1 minute to release handles faster
close_timeout: 3h                     # forcibly close the file handle if the transfer has not finished after 3 hours; this is the key setting for the case above
clean_inactive: 72h                   # clean registry entries for files older than this; the default 0 means never clean, and a registry that is never cleaned keeps growing and can cause problems
ignore_older: 70h                     # when clean_inactive is set, ignore_older must also be set and must be less than clean_inactive

# Limit CPU and memory resources
max_procs: 1                          # limit filebeat to one CPU core so it does not take too many resources from the business
queue.mem.events: 256                 # number of events kept in the in-memory queue before sending (default 4096)
queue.mem.flush.min_events: 128       # must be smaller than queue.mem.events; raising it increases throughput (default 2048)

# Start filebeat
$ systemctl start filebeat
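Filebeat ships with built-in checks that are handy to run before (or right after) starting the service; for example:

# Validate the syntax of filebeat.yml
$ filebeat test config -c /etc/filebeat/filebeat.yml
# Verify that the Kafka brokers listed under output.kafka are reachable
$ filebeat test output -c /etc/filebeat/filebeat.yml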

XI. Deploy curator to clean up ES indices on a schedule

Logstash4 operation

# Reference: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/yum-repository.html
# Install the curator service, taking CentOS 7 as an example
$ rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
$ vim /etc/yum.repos.d/elk-curator-5.repo
[curator-5]
name=CentOS/RHEL 7 repository for Elasticsearch Curator 5.x packages
baseurl=https://packages.elastic.co/curator/5/centos/7
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
$ yum install elasticsearch-curator -y

# Create the curator configuration directory and the log output directory
$ mkdir -p /data/ELKStack/curator/logs
$ cd /data/ELKStack/curator
$ vim config.yml
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts: ["172.20.166.25", "172.20.166.24", "172.20.166.22", "172.20.166.23", "172.20.166.26"]
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth: elastic:elastic123
  timeout: 150
  master_only: False

logging:
  loglevel: INFO
  logfile: /data/ELKStack/curator/logs/curator.log
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

$ vim action.yml
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days. Ignore the error if the filter does not
      result in an actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^((?!(kibana|json|monitoring|metadata|async|transform|siem|security)).)*$'
    - filtertype: age
      source: creation_date
      direction: older
      # timestring: '%Y-%m-%d'
      unit: days
      unit_count: 30
  2:
    action: delete_indices
    description: >-
      Delete indices older than 15 days. Ignore the error if the filter does not
      result in an actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(nginx-).*$'
    - filtertype: age
      source: creation_date
      direction: older
      # timestring: '%Y-%m-%d'
      unit: days
      unit_count: 15

# Create a scheduled task to clean up the es indices
$ crontab -e
00 * * * * /usr/bin/curator --config /data/ELKStack/curator/config.yml /data/ELKStack/curator/action.yml

This concludes the walkthrough of container deployment of ELK 7.10. Theory works best when paired with practice, so give it a try.
