
Deployment process of ELKB5.2.2 cluster environment


This article covers the deployment process of an ELKB 5.2.2 cluster environment. The method described here is simple, fast and practical; interested readers may wish to follow along.

Deployment of ELKB5.2.2 cluster environment

I have worked with ELK versions 1.4, 2.0, 2.4, 5.0 and 5.2. The earlier versions did not leave much of an impression, but after using 5.2 for a while things have started to click, which shows how gradual this kind of learning is. This document tries to be as detailed as possible while staying concise.

Note:

This is a major version change with many differences; the main deployment changes are as follows:

1. filebeat outputs directly to kafka and drops unnecessary fields such as the beat-related ones

2. master cluster layout optimization: 3 ElasticSearch master nodes and 6 data nodes

3. logstash filter adds urldecode to support Chinese display of url, referrer and agent

4. logstash filter adds geoip to support locating the client IP by region and city

5. logstash mutate replaces strings and removes unnecessary fields such as the kafka-related ones

6. the node.js plug-in (elasticsearch-head) needs to be deployed separately and can no longer be integrated as before

7. request parameter and request method are added to the nginx log

First, architecture

Optional architecture

Filebeat--elasticsearch--kibana

Filebeat--logstash--kafka--logstash--elasticsearch--kibana

Filebeat--kafka--logstash--elasticsearch--kibana

Because filebeat 5.2.2 supports multiple outputs (logstash, elasticsearch, kafka, redis, syslog, file, etc.), the following architecture was chosen to optimize resource utilization and support highly concurrent scenarios:

Filebeat (18) -- kafka (3) -- logstash (3) -- elasticsearch (3) -- kibana (3, behind nginx load balancing)

There are 3 physical machines running 12 virtual machines on CentOS 6.8, divided as follows:

Server 1 (192.168.188.186): kafka1 (32G RAM, 700G disk, 4 CPU), logstash (8G, 100G, 4 CPU), elasticsearch2 (40G, 1.4T, 8 CPU), elasticsearch3 (40G, 1.4T, 8 CPU)
Server 2 (192.168.188.187): kafka2 (32G, 700G, 4 CPU), logstash (8G, 100G, 4 CPU), elasticsearch4 (40G, 1.4T, 8 CPU), elasticsearch5 (40G, 1.4T, 8 CPU)
Server 3 (192.168.188.188): kafka3 (32G, 700G, 4 CPU), logstash (8G, 100G, 4 CPU), elasticsearch6 (40G, 1.4T, 8 CPU), elasticsearch7 (40G, 1.4T, 8 CPU)

Disk partitioning:
Logstash 100G: SWAP 8G, /boot 200M, remainder /
Kafka 700G: SWAP 8G, /boot 200M, / 30G, remainder /data
Elasticsearch 1.4T: SWAP 8G, /boot 200M, / 30G, remainder /data

IP distribution:
elasticsearch2-7: 192.168.188.191-196
kibana1-3: 192.168.188.191/193/195
kafka1-3: 192.168.188.237-239
logstash1-3: 192.168.188.197/198/240

Second, environmental preparation

yum -y remove java-1.6.0-openjdk
yum -y remove java-1.7.0-openjdk
yum -y remove perl-*
yum -y remove sssd-*
yum -y install java-1.8.0-openjdk
java -version
yum update
reboot

Set up the hosts entries required by the environment (kafka in particular):

cat /etc/hosts

192.168.188.191 ES191 (master and data)
192.168.188.192 ES192 (data)
192.168.188.193 ES193 (master and data)
192.168.188.194 ES194 (data)
192.168.188.195 ES195 (master and data)
192.168.188.196 ES196 (data)
192.168.188.237 kafka237
192.168.188.238 kafka238
192.168.188.239 kafka239
192.168.188.197 logstash297
192.168.188.198 logstash298
192.168.188.240 logstash340

Third, deploy elasticsearch cluster

mkdir /data/esnginx
mkdir /data/eslog

rpm -ivh /srv/elasticsearch-5.2.2.rpm
chkconfig --add elasticsearch
chkconfig postfix off

rpm -ivh /srv/kibana-5.2.2-x86_64.rpm

chown elasticsearch:elasticsearch /data/eslog -R
chown elasticsearch:elasticsearch /data/esnginx -R

Configuration file (3 master + 6 data)

[root@ES191 elasticsearch]# cat elasticsearch.yml | grep -Ev '^#|^$'

cluster.name: nginxlog
node.name: ES191
node.master: true
node.data: true
node.attr.rack: r1
path.data: /data/esnginx
path.logs: /data/eslog
bootstrap.memory_lock: true
network.host: 192.168.188.191
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.188.191", "192.168.188.192", "192.168.188.193", "192.168.188.194", "192.168.188.195", "192.168.188.196"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 5
gateway.recover_after_time: 5m
gateway.expected_nodes: 6
cluster.routing.allocation.same_shard.host: true
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
indices.recovery.max_bytes_per_sec: 30mb
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.system_call_filter: false    # required below kernel 3.0; not needed on CentOS 7 (kernel 3.10)

Pay special attention to the following:

/etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch soft nofile 65536
elasticsearch hard nofile 131072
elasticsearch soft nproc 2048
elasticsearch hard nproc 4096

/etc/elasticsearch/jvm.options
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms20g
-Xmx20g

Start the cluster

service elasticsearch start

Health check

http://192.168.188.191:9200/_cluster/health?pretty=true
{
  "cluster_name": "nginxlog",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 6,
  "number_of_data_nodes": 6,
  "active_primary_shards": 0,
  "active_shards": 0,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100.0
}

Elasticsearch-head plug-in

http://192.168.188.215:9100/

It can connect to any of the cluster nodes above on port 9200, e.g. 192.168.188.191:9200.
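Besides the head plug-in, the cluster can also be inspected from the command line; a minimal sketch using the standard _cat APIs (any node IP from the cluster above works):

curl 'http://192.168.188.191:9200/_cat/nodes?v'
curl 'http://192.168.188.191:9200/_cat/indices?v'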

Set sharding

It is officially recommended to set this when the index is created.

curl -XPUT 'http://192.168.188.193:9200/_all/_settings?preserve_existing=true' -d '{
  "index.number_of_replicas": "1",
  "index.number_of_shards": "6"
}'

This did not take effect (the number of shards cannot be changed on an existing index); it later turned out that the shard settings can be specified when the template is created. The default of 1 replica and 5 shards is currently used.
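As a sketch of the template approach (the template name and values below are illustrative, not taken from this deployment):

curl -XPUT 'http://192.168.188.193:9200/_template/nginx_shards' -H 'Content-Type: application/json' -d '{
  "template": "filebeat-*",
  "settings": {
    "index.number_of_shards": 6,
    "index.number_of_replicas": 1
  }
}'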

Other errors (for reference; the setting below is the workaround):

bootstrap.system_call_filter: false  # for "system call filters failed to install"

See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-call-filter-check.html

[WARN][o.e.b.JNANatives] unable to install syscall filter:

java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in

Fourth, deploy kafka cluster

Kafka cluster building

1) zookeeper cluster

wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
tar zxvf zookeeper-3.4.10.tar.gz -C /usr/local/
ln -s /usr/local/zookeeper-3.4.10/ /usr/local/zookeeper
mkdir -p /data/zookeeper/data/

vim /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data/zookeeper/data
clientPort=2181
server.1=192.168.188.237:2888:3888
server.2=192.168.188.238:2888:3888
server.3=192.168.188.239:2888:3888

vim /data/zookeeper/data/myid
1

/usr/local/zookeeper/bin/zkServer.sh start
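Note that the myid file must contain a different id (1, 2, 3) on each node. Once all three nodes are started, the ensemble state can be checked on each host; one node should report leader and the others follower:

/usr/local/zookeeper/bin/zkServer.sh status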

2) kafka cluster

wget http://mirrors.hust.edu.cn/apache/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz

tar zxvf kafka_2.11-0.10.0.1.tgz -C /usr/local/

ln -s /usr/local/kafka_2.11-0.10.0.1 /usr/local/kafka

The server.properties and zookeeper.properties changes below were obtained by diffing against the defaults; the default files cannot be used directly.

vim /usr/local/kafka/config/server.properties

broker.id=237
port=9092
host.name=192.168.188.237
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafkalog
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
zookeeper.connection.timeout.ms=6000
producer.type=async
broker.list=192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092

mkdir /data/kafkalog

Modify memory usage size

vim /usr/local/kafka/bin/kafka-server-start.sh

export KAFKA_HEAP_OPTS="-Xmx16G -Xms16G"

Start kafka

/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

Create the front-end topics

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx1-168 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx2-178 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx3-188 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

Check topic

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

ngx1-168

ngx2-178

ngx3-188
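To verify partition and replica assignment for a topic, the describe sub-command can be used (a sketch; any of the zookeeper nodes works):

/usr/local/kafka/bin/kafka-topics.sh --describe --topic ngx1-168 --zookeeper 192.168.188.237:2181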

3) start on boot

cat /etc/rc.local

/ usr/local/zookeeper/bin/zkServer.sh start

/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

Note: if startup is configured in rc.local and java was not installed as openjdk-1.8.0 via yum, JAVA_HOME must be specified explicitly; otherwise the java environment will not be in effect and the zookeeper and kafka services will fail to start, because the java environment is normally configured in /etc/profile, which only takes effect after rc.local has already run.
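A minimal sketch of such an rc.local entry, assuming a manually installed JDK under /usr/local/jdk1.8.0 (the path is illustrative):

export JAVA_HOME=/usr/local/jdk1.8.0
export PATH=$JAVA_HOME/bin:$PATH
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties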

Fifth, deploy and configure logstash

Installation

rpm -ivh logstash-5.2.2.rpm

mkdir /usr/share/logstash/config

# 1. Copy the configuration files to the logstash home
cp -r /etc/logstash/* /usr/share/logstash/config/

# 2. Configure the path
vim /usr/share/logstash/config/logstash.yml

Before modification:

path.config: /etc/logstash/conf.d

After modification:

path.config: /usr/share/logstash/config/conf.d

# 3. Modify startup.options

Before modification:

LS_SETTINGS_DIR=/etc/logstash

After modification:

LS_SETTINGS_DIR=/usr/share/logstash/config

After modifying startup.options, /usr/share/logstash/bin/system-install must be executed for the change to take effect.

Configuration

Each of the three logstash instances on the consumer/output side is responsible for only part of the topics:

in-kafka-ngx1-out-es.conf

in-kafka-ngx2-out-es.conf

in-kafka-ngx3-out-es.conf

[root@logstash297 conf.d]# cat in-kafka-ngx1-out-es.conf

input {
  kafka {
    bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
    group_id => "ngx1"
    topics => ["ngx1-168"]
    codec => "json"
    consumer_threads => 3
    decorate_events => true
  }
}
filter {
  mutate {
    gsub => ["message", "\\x", "%"]
    remove_field => ["kafka"]
  }
  json {
    source => "message"
    remove_field => ["message"]
  }
  geoip {
    source => "clientRealIp"
  }
  urldecode {
    all_fields => true
  }
}
output {
  elasticsearch {
    hosts => ["192.168.188.191:9200", "192.168.188.192:9200", "192.168.188.193:9200", "192.168.188.194:9200", "192.168.188.195:9200", "192.168.188.196:9200"]
    index => "filebeat-%{type}-%{+YYYY.MM.dd}"
    manage_template => true
    template_overwrite => true
    template_name => "nginx_template"
    template => "/usr/share/logstash/templates/nginx_template"
    flush_size => 50000
    idle_flush_time => 10
  }
}

Nginx template

[root@logstash297 logstash]# cat /usr/share/logstash/templates/nginx_template

{"template": "filebeat-*", "settings": {"index.refresh_interval": "10s"}, "mappings": {"_ default_": {"_ all": {"enabled": true, "omit_norms": true} "dynamic_templates": [{"string_fields": {"match_pattern": "regex", "match": "(agent) | (status) | (url) | (referrer) | (upstreamhost) | (http_host) | (request) | (request_method) | (upstreamstatus)", "match_mapping_type": "string" "mapping": {"type": "string", "index": "analyzed", "omit_norms": true, "fields": {"raw": {"type": "string", "index": "not_analyzed" "ignore_above": 512}] "properties": {"@ version": {"type": "string", "index": "not_analyzed"}, "geoip": {"type": "object", "dynamic": true "properties": {"location": {"type": "geo_point"}

Start

/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf &
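Before starting it in the background, the configuration can be syntax-checked first; a sketch using the standard logstash 5.x flag:

/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf --config.test_and_exit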

Default logstash startup

Reference

/ usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.5/DEVELOPER.md

Error reporting processing

[2017-05-08T12:24:30,388][ERROR][logstash.inputs.kafka] Unknown setting 'zk_connect' for kafka

[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka] Unknown setting 'topic_id' for kafka

[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka] Unknown setting 'reset_beginning' for kafka

[2017-05-08T12:24:30,395][ERROR][logstash.agent] Cannot load an invalid configuration {:reason=>"Something is wrong with your configuration."}

These settings belong to the old kafka input plugin; in the 5.x plugin they are replaced by bootstrap_servers, topics and auto_offset_reset.

Verify log

[root@logstash297 conf.d]# cat /var/log/logstash/logstash-plain.log

[2017-05-09T10:43:20,832][INFO][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.188.191:9200/, http://192.168.188.192:9200/, http://192.168.188.193:9200/, http://192.168.188.194:9200/, http://192.168.188.195:9200/, http://192.168.188.196:9200/]}}
[2017-05-09T10:43:20,838][INFO][logstash.outputs.elasticsearch] Running healthcheck to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.191:9200/, :path=>"/"}
[2017-05-09T10:43:20,919][WARN][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,920][INFO][logstash.outputs.elasticsearch] Running healthcheck to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.192:9200/, :path=>"/"}
[2017-05-09T10:43:20,922][WARN][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,924][INFO][logstash.outputs.elasticsearch] Running healthcheck to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.193:9200/, :path=>"/"}
[2017-05-09T10:43:20,927][WARN][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,927][INFO][logstash.outputs.elasticsearch] Running healthcheck to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.194:9200/, :path=>"/"}
[2017-05-09T10:43:20,929][WARN][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,930][INFO][logstash.outputs.elasticsearch] Running healthcheck to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.195:9200/, :path=>"/"}
[2017-05-09T10:43:20,932][WARN][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,933][INFO][logstash.outputs.elasticsearch] Running healthcheck to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.196:9200/, :path=>"/"}
[2017-05-09T10:43:20,935][WARN][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,936][INFO][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/usr/share/logstash/templates/nginx_template"}
[2017-05-09T10:43:20,970][INFO][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"filebeat-*", "settings"=>{"index.refresh_interval"=>"10s"}, "mappings"=>{"_default_"=>{...}}}}
[2017-05-09T10:43:20,974][INFO][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/nginx_template
[2017-05-09T10:43:21,009][INFO][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#, #, #, #, #, #]}
[2017-05-09T10:43:21,010][INFO][logstash.filters.geoip] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.4-java/vendor/GeoLite2-City.mmdb"}
[2017-05-09T10:43:21,022][INFO][logstash.pipeline] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-09T10:43:21,037][INFO][logstash.pipeline] Pipeline main started
[2017-05-09T10:43:21,086][INFO][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

Sixth, deploy and configure filebeat

Installation

rpm -ivh filebeat-5.2.2-x86_64.rpm

The nginx log format needs to be JSON:

log_format access '{"@timestamp": "$time_iso8601",'
                  '"clientRealIp": "$clientRealIp",'
                  '"size": $body_bytes_sent,'
                  '"request": "$request",'
                  '"method": "$request_method",'
                  '"responsetime": $request_time,'
                  '"upstreamhost": "$upstream_addr",'
                  '"http_host": "$host",'
                  '"url": "$uri",'
                  '"referrer": "$http_referer",'
                  '"agent": "$http_user_agent",'
                  '"status": "$status"}';

Configure filebeat

vim /etc/filebeat/filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /data/wwwlogs/*.log
  document_type: ngx1-168
  tail_files: true
  json.keys_under_root: true
  json.add_error_key: true

output.kafka:
  enabled: true
  hosts: ["192.168.188.237:9092", "192.168.188.238:9092", "192.168.188.239:9092"]
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  worker: 3

processors:
- drop_fields:
    fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"]

logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  rotateeverybytes: 10485760  # = 10MB
  keepfiles: 7

For detailed filebeat configuration, see the official documentation:

https://www.elastic.co/guide/en/beats/filebeat/5.2/index.html

Kafka as the log output:

https://www.elastic.co/guide/en/beats/filebeat/5.2/kafka-output.html

output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

Start

chkconfig filebeat on

/etc/init.d/filebeat start
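Optionally, the configuration can be validated before starting the service; a sketch using filebeat 5.x's config test flag (binary path assumed from the rpm install):

/usr/share/filebeat/bin/filebeat -configtest -e -c /etc/filebeat/filebeat.yml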

Error reporting processing

[root@localhost]# tail -f /var/log/filebeat/filebeat

2017-05-09T15:21:39+08:00 ERR Error decoding JSON: invalid character 'x' in string escape code

$uri in nginx reflects rewritten/normalized URLs, which can introduce \x escape sequences that break JSON decoding; $request_uri can be used for log output instead, and if there are no special business requirements it can replace $uri completely.
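A sketch of the adjusted log line; only the url field changes, the rest of the log_format above stays the same:

'"url": "$request_uri",'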

Reference

http://www.mamicode.com/info-detail-1368765.html

Seventh, verification

1) Kafka consumer view

/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic ngx1-168

2) Elasticsearch head to view index and shard information

Eighth, deploy and configure kibana

1) configuration and startup

cat /etc/kibana/kibana.yml

server.port: 5601

server.host: "192.168.188.191"

elasticsearch.url: "http://192.168.188.191:9200"

chkconfig --add kibana

/etc/init.d/kibana start

2) field format

{
  "_index": "filebeat-ngx1-168-2017.05.10",
  "_type": "ngx1-168",
  "_id": "AVvvtIJVy6ssC9hG9dKY",
  "_score": null,
  "_source": {
    "request": "GET /qiche/Audi A3/ HTTP/1.1",
    "agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
    "geoip": {
      "city_name": "Jinhua",
      "timezone": "Asia/Shanghai",
      "ip": "122.226.77.150",
      "latitude": 29.1068,
      "country_code2": "CN",
      "country_name": "China",
      "continent_code": "AS",
      "country_code3": "CN",
      "region_name": "Zhejiang",
      "location": [119.6442, 29.1068],
      "longitude": 119.6442,
      "region_code": "33"
    },
    "method": "GET",
    "type": "ngx1-168",
    "http_host": "www.niubi.com",
    "url": "/qiche/Audi A3/",
    "referrer": "http://www.niubi.com/qiche/Audi S6/",
    "upstreamhost": "172.17.4.205:80",
    "@timestamp": "2017-05-10T08:14:00.000Z",
    "size": 10027,
    "beat": {},
    "@version": "1",
    "responsetime": 0.217,
    "clientRealIp": "122.226.77.150",
    "status": "200"
  },
  "fields": {
    "@timestamp": [1494404040000]
  },
  "sort": [1494404040000]
}

3) view dashboard

1) add Amap tiles

Edit the kibana configuration file kibana.yml and add the following at the end:

tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'

ES template adjustment: geo_point fields are not handled by dynamic mapping, so they need to be specified explicitly.

To specify geoip.location as the geo_point type, add an item to the properties of the template, as follows:

"properties": {

"@ version": {"type": "string", "index": "not_analyzed"}

"geoip": {

"type": "object"

"dynamic": true

"properties": {

"location": {"type": "geo_point"}

}

}

}

4) install the x-pack plug-in

Reference

https://www.elastic.co/guide/en/x-pack/5.2/installing-xpack.html#xpack-installing-offline

https://www.elastic.co/guide/en/x-pack/5.2/setting-up-authentication.html#built-in-users

Remember to change the built-in passwords:

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/1.json

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/2.json

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/3.json

Or

curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d '
{
  "password": "elasticpassword"
}
'

curl -XPUT 'localhost:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d '
{
  "password": "kibanapassword"
}
'

curl -XPUT 'localhost:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d '
{
  "password": "logstashpassword"
}
'

Below is the official x-pack installation, upgrade and uninstall documentation. It was later found that the registered (free) version of x-pack only provides the monitoring feature, so it was not installed.

Installing X-Pack on Offline Machines

The plugin install scripts require direct Internet access to download and install X-Pack. If your server doesn't have Internet access, you can manually download and install X-Pack.

To install X-Pack on a machine that doesn't have Internet access:

1. Manually download the X-Pack zip file: https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.2.2.zip (sha1)
2. Transfer the zip file to a temporary directory on the offline machine. (Do NOT put the file in the Elasticsearch plugins directory.)
3. Run bin/elasticsearch-plugin install from the Elasticsearch install directory and specify the location of the X-Pack zip file, for example:
   bin/elasticsearch-plugin install file:///path/to/file/x-pack-5.2.2.zip
   Note: you must specify an absolute path to the zip file after the file:// protocol.
4. Run bin/kibana-plugin install from the Kibana install directory and specify the location of the X-Pack zip file. (The plugins for Elasticsearch, Kibana, and Logstash are included in the same zip file.) For example:
   bin/kibana-plugin install file:///path/to/file/x-pack-5.2.2.zip
5. Run bin/logstash-plugin install from the Logstash install directory and specify the location of the X-Pack zip file. For example:
   bin/logstash-plugin install file:///path/to/file/x-pack-5.2.2.zip

Enabling and Disabling X-Pack Features

By default, all X-Pack features are enabled. You can explicitly enable or disable X-Pack features in elasticsearch.yml and kibana.yml:

xpack.security.enabled: set to false to disable X-Pack security. Configure in both elasticsearch.yml and kibana.yml.
xpack.monitoring.enabled: set to false to disable X-Pack monitoring. Configure in both elasticsearch.yml and kibana.yml.
xpack.graph.enabled: set to false to disable X-Pack graph. Configure in both elasticsearch.yml and kibana.yml.
xpack.watcher.enabled: set to false to disable Watcher. Configure in elasticsearch.yml only.
xpack.reporting.enabled: set to false to disable X-Pack reporting. Configure in kibana.yml only.

Ninth, nginx load balancing

1, configure the load

[root@ ~]# cat /usr/local/nginx/conf/nginx.conf

server
{
    listen 5601;
    server_name 192.168.188.215;
    index index.html index.htm index.shtml;
    location / {
        allow 192.168.188.0/24;
        deny all;
        proxy_pass http://kibanangx_niubi_com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        auth_basic "Please input Username and Password";
        auth_basic_user_file /usr/local/nginx/conf/.pass_file_elk;
    }
    access_log /data/wwwlogs/access_kibanangx.niubi.com.log access;
}
upstream kibanangx_niubi_com {
    ip_hash;
    server 192.168.188.191:5601;
    server 192.168.188.193:5601;
    server 192.168.188.195:5601;
}
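The auth_basic_user_file referenced above has to be created separately; a minimal sketch using htpasswd from httpd-tools (the username is illustrative):

yum -y install httpd-tools
htpasswd -c /usr/local/nginx/conf/.pass_file_elk elkadmin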

2, visit

http://192.168.188.215:5601/app/kibana#

---------- A perfect dividing line ----------

Optimization document

ELKB5.2 Cluster Optimization Scheme

First, optimization results

Before optimization

Log collection handles 10,000 requests/s with a delay of less than 10 seconds (data is refreshed every 10 seconds by default).

After optimization

Log collection handles 30,000 requests/s with a delay of less than 10 seconds (data is refreshed every 10 seconds by default). It is estimated that up to 50,000 requests/s could be handled.

Disadvantages: CPU processing capacity is insufficient; aggregations over large time ranges in dashboards time out when generating visualizations. There is also further room to optimize the elasticsearch structure and search syntax.

Second, optimization steps

1, memory and CPU re-planning

1) es: 16 CPU, 48G memory

2) kafka: 8 CPU, 16G memory

3) logstash: 16 CPU, 12G memory

2) Kafka optimization

Use kafka-manager to monitor and observe consumption.

The kafka heap size needs to be modified.

One kafka-related parameter in logstash also needs to be changed.

1) modify the jvm memory size

vi /usr/local/kafka/bin/kafka-server-start.sh

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx8G -Xms8G"
    export JMX_PORT="8999"
fi

2), Broker parameter configuration

Configuration optimization means modifying the parameter values in the server.properties file.

Network and IO thread configuration:

# number of threads the broker uses to process network messages (default 3; can be set to the number of CPU cores)
num.network.threads=4

# number of threads the broker uses for disk IO (default 4; can be about 2 times the number of CPU cores)
num.io.threads=8

3), install kafka monitoring

/data/scripts/kafka-manager-1.3.3.4/bin/kafka-manager

http://192.168.188.215:8099/clusters/ngxlog/consumers

3) logstash optimization

Logstash needs the following configuration changes:

1), modify the jvm parameter

vi /usr/share/logstash/config/jvm.options

-Xms2g

-Xmx6g

2), modify logstash.yml

vi /usr/share/logstash/config/logstash.yml

path.data: /var/lib/logstash

pipeline.workers: 16    # number of CPU cores

pipeline.output.workers:    # equivalent to the number of workers in the elasticsearch output

pipeline.batch.size: 5000    # set according to qps, pressure, etc.

pipeline.batch.delay: 5

path.config: /usr/share/logstash/config/conf.d

path.logs: /var/log/logstash

3) modify the corresponding logstash.conf file

Input file

vi /usr/share/logstash/config/in-kafka-ngx12-out-es.conf

input {
  kafka {
    bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
    group_id => "ngx1"
    topics => ["ngx1-168"]
    codec => "json"
    consumer_threads => 3
    auto_offset_reset => "latest"    # add this line
    # decorate_events => true        # removed
  }
}

Filter file

filter {
  mutate {
    gsub => ["message", "\\x", "%"]    # escape fix: the URL encoding differs from that of the request; needed so Chinese characters display correctly
    # remove_field => ["kafka"]        # with decorate_events removed (default false) the kafka.* fields are no longer added, so there is no need to remove them
  }
  # (the json, geoip and urldecode filters remain unchanged)
}

Output file

Before modification:

flush_size => 50000

idle_flush_time => 10

After modification (output up to 80,000 events per flush, flushing every 4 seconds):

flush_size => 80000

idle_flush_time => 4

Logstash output after startup (pipeline.max_inflight is 80K)

[2017-05-16T10:07:02,552][INFO][logstash.pipeline] Starting pipeline {"id"=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>5000, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>80000}
[2017-05-16T10:07:02,553][WARN][logstash.pipeline] CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 80000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 5000), or changing the number of pipeline workers (currently 16)

4) ElasticSearch optimization

1) modify the jvm parameters

vi /etc/elasticsearch/jvm.options

Adjust to 24g, at most 50% of the virtual machine's memory:

-Xms24g

-Xmx24g

2) modify the GC method (TBD; needs follow-up observation, and it is not recommended to change this parameter if you are unsure)

The default GC used by elasticsearch is CMS GC.

If the heap exceeds about 6 GB, CMS does not perform well and stop-the-world pauses become likely.

G1 GC is recommended

Comment out:

JAVA_OPTS= "$JAVA_OPTS-XX:+UseParNewGC"

JAVA_OPTS= "$JAVA_OPTS-XX:+UseConcMarkSweepGC"

JAVA_OPTS= "$JAVA_OPTS-XX:CMSInitiatingOccupancyFraction=75"

JAVA_OPTS= "$JAVA_OPTS-XX:+UseCMSInitiatingOccupancyOnly"

Modified to:

JAVA_OPTS= "$JAVA_OPTS-XX:+UseG1GC"

JAVA_OPTS= "$JAVA_OPTS-XX:MaxGCPauseMillis=200"

3) install elasticsearch cluster monitoring tool Cerebro

https://github.com/lmenezes/cerebro

Cerebro is a third-party elasticsearch cluster management tool that makes it easy to check cluster status:

https://github.com/lmenezes/cerebro/releases/download/v0.6.5/cerebro-0.6.5.tgz

Access address after installation:

http://192.168.188.215:9000/
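As a sketch of getting it running (the port matches the access address above; the install path is an assumption):

tar zxvf cerebro-0.6.5.tgz -C /data/scripts/
cd /data/scripts/cerebro-0.6.5
nohup bin/cerebro -Dhttp.port=9000 &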

4) elasticsearch search parameter optimization (a hard problem)

Not much could be done here: the default configuration is already quite good, and the bulk, refresh and related settings were already in place.

5) elasticsearch cluster role optimization

es191, es193 and es195 act only as master + ingest nodes.

es192, es194 and es196 act only as data nodes (in the layout above, every 2 virtual machines share one set of RAID5 disks, so performance suffers if both act as data nodes).

Adding 2 more data nodes greatly improves aggregation performance. The role settings involved are sketched below.
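A minimal sketch of the elasticsearch.yml role settings involved (shown for one master-only and one data-only node; these are the standard node role options):

# on es191 / es193 / es195 (master + ingest only)
node.master: true
node.data: false
node.ingest: true

# on es192 / es194 / es196 (data only)
node.master: false
node.data: true
node.ingest: false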

5) filebeat optimization

1) input in json format, so that logstash does not need to decode it, relieving pressure on the backend

json.keys_under_root: true

json.add_error_key: true

2) drop unnecessary fields, as follows:

vim /etc/filebeat/filebeat.yml

processors:
- drop_fields:
    fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"]

3) schedule the task to delete the index

Indices are retained for 5 days by default.
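A cron entry along these lines can run the cleanup script shown below once per day (the schedule itself is an assumption):

crontab -e
0 1 * * * /bin/bash /data/scripts/delindex.sh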

cat /data/scripts/delindex.sh

#!/bin/bash
OLDDATE=`date -d -5days +%Y.%m.%d`
echo $OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx1-168-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx2-178-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx3-188-$OLDDATE

At this point, I believe you have a better understanding of the deployment process of the ELKB 5.2.2 cluster environment. You might as well try it out in practice.
