Construction of ELK Log Analysis System

I. Environment preparation

1. Install the java environment:

yum install java-1.8.0-openjdk* -y

2. Add the elk execution user:

groupadd -g 77 elk
useradd -u 77 -g elk -d /home/elk -s /bin/bash elk

3. Append the following to /etc/security/limits.conf:

elk soft memlock unlimited
elk hard memlock unlimited
* soft nofile 65536
* hard nofile 131072

4. Apply the changes:

sysctl -p
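
Note that sysctl -p reloads kernel parameters from /etc/sysctl.conf, while the limits.conf entries above only take effect for new login sessions. Elasticsearch 5.x also performs a bootstrap check on vm.max_map_count when it binds to a non-loopback address; the original article does not show this step, but a minimal sketch of that extra kernel setting would be:

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p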

5. Configure the hostname:

hostnamectl set-hostname monitor-elk
echo "10.135.3.135 monitor-elk" >> /etc/hosts

II. Service deployment

1. Server:

1) Download the source packages related to ELK:

Wget "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.tar.gz"wget" https://artifacts.elastic.co/downloads/logstash/logstash-5.2.2.tar.gz"wget "https://artifacts.elastic.co/downloads/kibana/kibana-5.2.2-linux-x86_64.tar.gz"wget" http://mirror.bit.edu.cn/apache/kafka/0.10.2.0/kafka_2. 12-0.10.2.0.tgz "wget" http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz"

2) Create an elk directory and extract the above source packages into it:

mkdir /usr/local/elk
mkdir -p /data/elasticsearch/
chown -R elk.elk /data/elasticsearch/
mkdir -p /data/{kafka,zookeeper}
mv logstash-5.2.2 logstash && mv kibana-5.2.2-linux-x86_64 kibana && mv elasticsearch-5.2.2 elasticsearch && mv filebeat-5.2.2-linux-x86_64 filebeat && mv kafka_2.12-0.10.2.0 kafka && mv zookeeper-3.4.9 zookeeper
chown -R elk.elk /usr/local/elk/
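
The extraction step itself is not shown above; a minimal sketch, assuming the downloaded archives sit in the current directory and are unpacked into /usr/local/elk/ before the mv commands, would be:

tar -zxf elasticsearch-5.2.2.tar.gz -C /usr/local/elk/
tar -zxf logstash-5.2.2.tar.gz -C /usr/local/elk/
tar -zxf kibana-5.2.2-linux-x86_64.tar.gz -C /usr/local/elk/
tar -zxf kafka_2.12-0.10.2.0.tgz -C /usr/local/elk/
tar -zxf zookeeper-3.4.9.tar.gz -C /usr/local/elk/
cd /usr/local/elk/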

The list of program directories is as follows:

3) Modify the configuration files of the following programs:

① kibana:

[root@monitor-elk ~]# cat /usr/local/elk/kibana/config/kibana.yml | grep -v "^#\|^$"
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
elasticsearch.requestTimeout: 30000
logging.dest: /data/elk/logs/kibana.log
[root@monitor-elk ~]#

② elasticsearch:

[root@monitor-elk ~]# cat /usr/local/elk/elasticsearch/config/elasticsearch.yml | grep -v "^#\|^$"
node.name: node01
path.data: /data/elasticsearch/data
path.logs: /data/elk/logs/elasticsearch
bootstrap.memory_lock: true
network.host: 127.0.0.1
http.port: 9200
[root@monitor-elk ~]#

/usr/local/elk/elasticsearch/config/jvm.options
# modify the following parameters
-Xms1g
-Xmx1g

③ logstash:

[root@monitor-elk ~]# cat /usr/local/elk/logstash/config/logs.yml
input {
    # use kafka as the source of the log data
    kafka {
        bootstrap_servers => ["127.0.0.1:9092"]
        topics => "beats"
        codec => json
    }
}
filter {
    # drop the record if the log data contains this IP address
    if [message] =~ "123.151.4.10" {
        drop {}
    }
    # transcode to normal url encoding, for example:
    # urldecode {
    #     all_fields => true
    # }
    # nginx access
    # determine the incoming log type
    if [type] == "hongbao-nginx-access" or [type] == "pano-nginx-access" or [type] == "logstash-nginx-access" {
        grok {
            # specify the custom grok pattern path
            patterns_dir => "./patterns"
            # parse the log content with the named custom pattern and split it into fields
            match => { "message" => "%{NGINXACCESS}" }
            # remove the default message field after parsing
            remove_field => ["message"]
        }
        # use the geoip library to resolve the IP address
        geoip {
            # specify the field used as the data source
            source => "clientip"
            fields => ["country_name", "ip", "region_name"]
        }
        date {
            # match the time in the log content, for example 05/Jun/2017:03:54:01 +0800
            match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
            # assign the matched time to the @timestamp field
            target => "@timestamp"
            remove_field => ["timestamp"]
        }
    }
    # tomcat access
    if [type] == "hongbao-tomcat-access" or [type] == "ljq-tomcat-access" {
        grok {
            patterns_dir => "./patterns"
            match => { "message" => "%{TOMCATACCESS}" }
            remove_field => ["message"]
        }
        geoip {
            source => "clientip"
            fields => ["country_name", "ip", "region_name"]
        }
        date {
            match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
            target => "@timestamp"
            remove_field => ["timestamp"]
        }
    }
    # tomcat catalina
    if [type] == "hongbao-tomcat-catalina" {
        grok {
            match => { "message" => "^(?<log_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) (?<level>\w*) (?<log_data>.+)" }
            remove_field => ["message"]
        }
        date {
            match => ["log_time", "yyyy-MM-dd HH:mm:ss,SSS"]
            target => "@timestamp"
            remove_field => ["log_time"]
        }
    }
}
output {
    # write records that failed to parse to the specified file
    if "_grokparsefailure" in [tags] {
        file {
            path => "/data/elk/logs/grokparsefailure-%{[type]}-%{+YYYY.MM}.log"
        }
    }
    # nginx access
    # output to different elasticsearch indexes according to the log type
    if [type] == "hongbao-nginx-access" {
        # output the processed result to elasticsearch
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            # specify the index
            index => "hongbao-nginx-access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "pano-nginx-access" {
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            index => "pano-nginx-access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "logstash-nginx-access" {
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            index => "logstash-nginx-access-%{+YYYY.MM.dd}"
        }
    }
    # tomcat access
    if [type] == "hongbao-tomcat-access" {
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            index => "hongbao-tomcat-access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "ljq-tomcat-access" {
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            index => "ljq-tomcat-access-%{+YYYY.MM.dd}"
        }
    }
    # tomcat catalina
    if [type] == "hongbao-tomcat-catalina" {
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            index => "hongbao-tomcat-catalina-%{+YYYY.MM.dd}"
        }
    }
}
[root@monitor-elk ~]#

Configure the custom grok patterns:

[root@monitor-elk ~]# cp /usr/local/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.2/patterns/grok-patterns /usr/local/elk/logstash/config/patterns
[root@monitor-elk ~]# tail -5 /usr/local/elk/logstash/config/patterns
# Nginx
NGINXACCESS %{COMBINEDAPACHELOG} %{QS:x_forwarded_for}
# Tomcat
TOMCATACCESS %{COMMONAPACHELOG}
[root@monitor-elk ~]# chown elk.elk /usr/local/elk/logstash/config/patterns

4) configure zookeeper:

cp /usr/local/elk/zookeeper/conf/zoo_sample.cfg /usr/local/elk/zookeeper/conf/zoo.cfg

Modify the data storage path in the configuration file

vim /usr/local/elk/zookeeper/conf/zoo.cfg
dataDir=/data/zookeeper

Back up and modify the script /usr/local/elk/zookeeper/bin/zkEnv.sh

Modify the following variables:

ZOO_LOG_DIR="/data/zookeeper-logs"
ZOO_LOG4J_PROP="INFO,ROLLINGFILE"

Back up and modify the log configuration /usr/local/elk/zookeeper/conf/log4j.properties

Modify the following variables:

zookeeper.root.logger=INFO, ROLLINGFILE
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender   # rotate logs daily

Start zookeeper:

/usr/local/elk/zookeeper/bin/zkServer.sh start
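
To confirm that zookeeper came up correctly, the bundled status command can be used (this check is an addition, not part of the original article):

/usr/local/elk/zookeeper/bin/zkServer.sh status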

5) configure kafka:

Modify the following parameters in the configuration file /usr/local/elk/kafka/config/server.properties

log.dirs=/data/kafka
zookeeper.connect=localhost:2181

Back up and modify the script /usr/local/elk/kafka/bin/kafka-run-class.sh

Append the LOG_DIR variable on the line after "base_dir=$(dirname $0)/.." to specify the log output path:

LOG_DIR=/data/kafka-logs

Create a log storage directory:

mkdir -p /data/kafka-logs
mkdir -p /data/elk/logs
chown -R elk.elk /data/elk/logs

Start kafka:

nohup /usr/local/elk/kafka/bin/kafka-server-start.sh /usr/local/elk/kafka/config/server.properties &>> /data/elk/logs/kafka.log &

Note that the hostname must be configured in the /etc/hosts file, otherwise Kafka will fail to start:

[root@monitor-elk ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
10.135.3.135 monitor-elk
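
Once kafka is running, the topic list can be checked with the bundled admin script; the beats topic will appear once the first message arrives, or it can be created manually. This verification step is an addition, not part of the original article:

/usr/local/elk/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --list
/usr/local/elk/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic beats --partitions 1 --replication-factor 1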

6) configure supervisor

① Install supervisor:

yum install supervisor -y

Set the service to start automatically at boot (the programs it manages will then also be started):

systemctl enable supervisord.service

② Modify the configuration

a. Create a log storage path:

mkdir -p /data/supervisor
chown -R elk.elk /data/supervisor/

b. Modify the main configuration file /etc/supervisord.conf

logfile=/data/supervisor/supervisord.log

c. Create the supervisor configuration file for the elk program and add the following configuration:

[root@monitor-elk ~]# cat /etc/supervisord.d/elk.ini
[program:elasticsearch]
directory=/usr/local/elk/elasticsearch
command=su -c "/usr/local/elk/elasticsearch/bin/elasticsearch" elk
autostart=true
startsecs=5
autorestart=true
startretries=3
priority=10

[program:logstash]
directory=/usr/local/elk/logstash
command=/usr/local/elk/logstash/bin/logstash -f /usr/local/elk/logstash/config/logs.yml
user=elk
autostart=true
startsecs=5
autorestart=true
startretries=3
redirect_stderr=true
stdout_logfile=/data/elk/logs/logstash.log
stdout_logfile_maxbytes=1024MB
stdout_logfile_backups=10
priority=11

[program:kibana]
directory=/usr/local/elk/kibana
command=/usr/local/elk/kibana/bin/kibana
user=elk
autostart=true
startsecs=5
autorestart=true
startretries=3
priority=12
[root@monitor-elk ~]#

③ Start supervisor:

systemctl start supervisord

View program processes and logs:

ps aux | grep -v grep | grep "elasticsearch\|logstash\|kibana"
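
A quick way to confirm that supervisor started all three programs and that elasticsearch is answering (an extra check, not shown in the original article) is:

supervisorctl status
curl -s 'http://localhost:9200/_cat/health?v'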

Tip:

Restart a single configured program, such as:

supervisorctl restart logstash

Restart all configured programs:

supervisorctl restart all

Reload the configuration (only programs whose configuration changed are restarted; programs with unchanged configuration keep running):

supervisorctl update

7) configure nginx

① install nginx

yum install nginx -y

② configure nginx proxy:

[root@monitor-elk ~]# cat /etc/nginx/conf.d/kibana.conf
upstream kibana {
    server 127.0.0.1:5601 max_fails=3 fail_timeout=30s;
}
server {
    listen       8080;
    server_name  localhost;
    location / {
        proxy_pass http://kibana/;
        index index.html index.htm;
        # auth
        auth_basic "kibana Private";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
[root@monitor-elk ~]# grep listen /etc/nginx/nginx.conf
    listen       8000 default_server;
    listen       [::]:8000 default_server;
[root@monitor-elk ~]#

③ Create nginx authentication:

[root@monitor-elk ~]# yum install httpd -y
[root@monitor-elk ~]# htpasswd -cm /etc/nginx/.htpasswd elk
New password:
Re-type new password:
Adding password for user elk
[root@monitor-elk ~]# systemctl start nginx
[root@monitor-elk ~]# systemctl enable nginx

8) configure ik Chinese word segmentation:

① Install maven:

Wget "http://mirror.bit.edu.cn/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz"tar-zxf apache-maven-3.3.9-bin.tar.gzmv apache-maven-3.3.9 / usr/local/mavenecho" export MAVEN_HOME=/usr/local/maven "> / etc/bashrcecho" export PATH=$PATH:$MAVEN_HOME/bin "> > / etc/bashrc. / etc/bashrc

② Compile and install ik (download the version matching elasticsearch):

Wget "https://github.com/medcl/elasticsearch-analysis-ik/archive/v5.2.2.zip"unzip v5.2.2.zipcd elasticsearch-analysis-ik-5.2.2/mvn packagemkdir / usr/local/elk/elasticsearch/plugins/ikcp target/releases/elasticsearch-analysis-ik-5.2.2.zip / usr/local/elk/elasticsearch/plugins/ik/cd / usr/local/elk/elasticsearch/plugins/ik/unzip elasticsearch-analysis-ik-5.2.2.zip Rm-f elasticsearch-analysis-ik-5.2.2.zipchown-R elk.elk.. / iksupervisorctl restart elasticsearch

③ Create an index template:

To use the ik analyzer, you need to create an index template before the target index is created (either manually or through the logstash configuration); otherwise the default template is used:

cd /usr/local/elk/logstash

Create and edit the file logstash.json, adding the following:

{"order": 1, "template": "tomcatcat-*", "settings": {"index": {"refresh_interval": "5s"}} "mappings": {"_ default_": {"dynamic_templates": [{"string_fields": {"mapping": {"norms": false, "type": "text", "analyzer": "ik_max_word" "search_analyzer": "ik_max_word"}, "match_mapping_type": "text", "match": "*"}}], "_ all": {"norms": false, "enabled": true} "properties": {"@ timestamp": {"include_in_all": false, "type": "date"}, "log_data": {"include_in_all": true, "type": "text", "analyzer": "ik_max_word" "search_analyzer": "ik_max_word", "boost": 8}, "@ version": {"include_in_all": false, "type": "keyword"}, "aliases": {}}'

After adding, execute the curl command to create the index template

curl -XPUT 'http://localhost:9200/_template/tomcatcat' -d @logstash.json

The result {"acknowledged": true} will be returned after successful execution.

④ hot update configuration:

There are some words that ik cannot segment correctly, such as company names, service names, etc.:

curl -XGET 'http://localhost:9200/_analyze?pretty&analyzer=ik_smart' -d 'Tencent Cloud'

In this case you need a custom dictionary. ik supports hot updating of the dictionary (no need to restart elasticsearch) and checks for changes about once a minute.

Create a UTF-8 text file ik.txt under the nginx root path and write the words to be segmented into it, one word per line:

Then modify /usr/local/elk/elasticsearch/plugins/ik/config/IKAnalyzer.cfg.xml so that the remote extension dictionary points to:

http://127.0.0.1:8000/ik.txt
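
A sketch of the relevant entry in IKAnalyzer.cfg.xml, assuming the stock layout of the configuration file shipped with the plugin:

<!-- remote extension dictionary; ik polls this URL for changes -->
<entry key="remote_ext_dict">http://127.0.0.1:8000/ik.txt</entry>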

Restart elasticsearch after configuration, and get the word segmentation result again:

2. Client:

1) download filebeat:

Wget "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.2-linux-x86_64.tar.gz"

Extract filebeat-5.2.2-linux-x86_64.tar.gz to the /usr/local/elk/ directory and rename it to filebeat

mkdir /usr/local/elk/
mkdir -p /data/elk/logs/
echo "10.135.3.135 elk" >> /etc/hosts

2) configure filebeat:

[root@test2 filebeat]# cat logs.yml
filebeat.prospectors:
-
  # specify the paths of the log files to be monitored; wildcards (*) can be used
  paths:
    - /data/nginx/log/*_access.log
  # the input type of the file is log (the default)
  input_type: log
  # set the log type
  document_type: pano-nginx-access
  # read new content from the end of the file
  tail_files: true
# send to kafka
output.kafka:
  hosts: ["10.135.3.135:9092"]
  topic: beats
  compression: Snappy
[root@test2 filebeat]#

[root@test3 filebeat]# cat logs.yml
filebeat.prospectors:
- paths:
    - /usr/local/tomcat/logs/*access_log.*.txt
  input_type: log
  document_type: hongbao-tomcat-access
  tail_files: true
- paths:
    - /usr/local/tomcat/logs/catalina.out
  input_type: log
  document_type: hongbao-tomcat-catalina
  # multiline matching pattern: a regular expression matching the leading timestamp, for example 2017-06-05 10:00:00,000
  multiline.pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}'
  # merge lines that do not match the pattern into the previous line, for example java error logs
  multiline.negate: true
  # append the unmatched lines to the end of the previous line
  multiline.match: after
  tail_files: true
output.kafka:
  hosts: ["10.135.3.135:9092"]
  topic: beats
  compression: Snappy
[root@test3 filebeat]#

3) start filebeat

nohup /usr/local/elk/filebeat/filebeat -e -c /usr/local/elk/filebeat/logs.yml -d "publish" &>> /data/elk/logs/filebeat.log &
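
To verify that filebeat is actually delivering events to kafka, a console consumer can be run on the server (an extra check, not part of the original article):

/usr/local/elk/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic beats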

III. Kibana web side configuration

1. Access the kibana address in a browser and enter the account and password configured in nginx earlier:

http://10.135.3.135:8080

When accessing Kibana, the Discover page is loaded by default with the default index pattern (logstash-*) selected. The time filter defaults to the last 15 minutes, and the search query defaults to match-all (*).

Server resource status page:

http://10.135.3.135:8080/status

2. Create an index pattern

Note that the name of the index pattern must match indexes generated by logstash's output (that is, indexes that already exist in Elasticsearch and contain data). For example, logstash-* matches logstash-20170330, as well as multiple indexes (all indexes whose names start with logstash-).

* matches zero or more characters in the index name.

3. After the index pattern is created, click it in Discover to see the log data in Elasticsearch.

4. Create a visual chart

Draw a visual chart that aggregates the response status code field parsed out of the nginx or tomcat access logs, so that the count of each status code (200, 400, etc.) is displayed intuitively as a chart.

1) Click Vertical Bar Charts (vertical bar chart) in Visualize

2) Select one of the index modes, such as pano-*

3) Specify a terms aggregation on the field response.keyword to display the top five status codes in descending order of count, then click the Apply changes icon to take effect (an equivalent query sketch is shown after this list).

In the chart, the X axis shows the status code and the Y axis shows the total number of status codes.

4) Finally, click Save in the upper right corner and enter a name for the visualization.
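
For reference, the chart above corresponds roughly to the following terms aggregation against elasticsearch. The index pattern pano-nginx-access-* and the response.keyword field come from the logstash configuration earlier; the query itself is an illustrative sketch rather than something the original article runs:

curl -XGET 'http://localhost:9200/pano-nginx-access-*/_search?pretty' -d '
{
  "size": 0,
  "aggs": {
    "status_codes": {
      "terms": { "field": "response.keyword", "size": 5 }
    }
  }
}'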

5. Create a dashboard

Visual objects of the same business or type can be displayed centrally in the same dashboard.

1) Click add to add visual objects to the dashboard

2) Click the created visual objects and they will be arranged in the dashboard window; resize each object's window as appropriate.

3) After adding and adjusting, click Save in the upper right corner and enter a name for the dashboard.

4) The result is displayed.

IV. Service monitoring script

1. Server side

1) kafka

[root@monitor-elk ~]# cat /usr/local/scripts/monitor_kafka.sh
#!/bin/bash
# author:Ellen
# describes:Check kafka program
# version:v1.0
# updated:20170407

# Configuration information
program_dir=/usr/local/elk/kafka
logfile=/usr/local/scripts/log/monitor_kafka.log

# Check executed user
if [ `whoami` != "root" ];then
    echo "Please use root to run this script!"
    exit 1
fi

# Check kafka program
num=`ps aux | grep -w $program_dir | grep -vw "grep\|vim\|mv\|scp\|dd\|head\|script\|ls\|sys_log\|logger\|tar\|rsync\|ssh" | wc -l`
if [ ${num} -eq 0 ];then
    echo "[`date +'%F %T'`] [CRITICAL] Kafka program does not start!" | tee -a $logfile
    # Send alarm information
    # cagent_tools is an alarm plug-in that comes with Tencent CVM and can send SMS or mail alarms; comment it out if you do not need it
    /usr/bin/cagent_tools alarm "Kafka program does not start!"
    echo "[`date +'%F %T'`] [INFO] Begin start kafka program..." | tee -a $logfile
    nohup /usr/local/elk/kafka/bin/kafka-server-start.sh /usr/local/elk/kafka/config/server.properties &>> /data/elk/logs/kafka.log &
    if [ $? -eq 0 ];then
        echo "[`date +'%F %T'`] [INFO] Kafka program start successful." | tee -a $logfile
        /usr/bin/cagent_tools alarm "Kafka program start successful"
        exit 0
    else
        echo "[`date +'%F %T'`] [CRITICAL] Kafka program start failed!" | tee -a $logfile
        /usr/bin/cagent_tools alarm "Kafka program start failed! Please handle it!"
        exit 6
    fi
else
    echo "[`date +'%F %T'`] [INFO] Kafka program is running..." | tee -a $logfile
    exit 0
fi
[root@monitor-elk ~]#

2) zookeeper

[root@monitor-elk ~]# cat /usr/local/scripts/monitor_zookeeper.sh
#!/bin/bash
# author:Ellen
# describes:Check zookeeper program
# version:v1.0
# updated:20170407

# Configuration information
program_dir=/usr/local/elk/zookeeper
logfile=/usr/local/scripts/log/monitor_zookeeper.log

# Check executed user
if [ `whoami` != "root" ];then
    echo "Please use root to run this script!"
    exit 1
fi

# Check zookeeper program
num=`ps aux | grep -w $program_dir | grep -vw "grep\|vim\|vi\|scp\|cat\|tail\|head\|ls\|echo\|tar\|rsync\|ssh" | wc -l`
if [ ${num} -eq 0 ];then
    echo "[`date +'%F %T'`] [CRITICAL] Zookeeper program does not start!" | tee -a $logfile
    # Send alarm information
    /usr/bin/cagent_tools alarm "Zookeeper program does not start!"
    echo "[`date +'%F %T'`] [INFO] Begin start zookeeper program..." | tee -a $logfile
    /usr/local/elk/zookeeper/bin/zkServer.sh start
    if [ $? -eq 0 ];then
        echo "[`date +'%F %T'`] [INFO] Zookeeper program start successful." | tee -a $logfile
        /usr/bin/cagent_tools alarm "Zookeeper program start successful"
        exit 0
    else
        echo "[`date +'%F %T'`] [CRITICAL] Zookeeper program start failed!" | tee -a $logfile
        /usr/bin/cagent_tools alarm "Zookeeper program start failed! Please handle it!"
        exit 6
    fi
else
    echo "[`date +'%F %T'`] [INFO] Zookeeper program is running..." | tee -a $logfile
    exit 0
fi
[root@monitor-elk ~]#

3) add crontab scheduled tasks

0-59/5 * * * * /usr/local/scripts/monitor_kafka.sh &> /dev/null
0-59/5 * * * * /usr/local/scripts/monitor_zookeeper.sh &> /dev/null

2. Client:

[root@test2 ~]# cat /usr/local/scripts/monitor_filebeat.sh
#!/bin/bash
# author:Ellen
# describes:Check filebeat program
# version:v1.0
# updated:20170407

# Configuration information
program_dir=/usr/local/elk/filebeat
logfile=/usr/local/scripts/log/monitor_filebeat.log

# Check executed user
if [ `whoami` != "root" ];then
    echo "Please use root to run this script!"
    exit 1
fi

# Check filebeat program
num=`ps aux | grep -w $program_dir | grep -vw "grep\|vim\|mv\|cp\|cat\|tail\|head\|script\|echo\|sys_log\|logger\|tar\|rsync\|ssh" | wc -l`
if [ ${num} -eq 0 ];then
    echo "[`date +'%F %T'`] [CRITICAL] Filebeat program does not start!" | tee -a $logfile
    # Send alarm information
    /usr/bin/cagent_tools alarm "Filebeat program does not start!"
    echo "[`date +'%F %T'`] [INFO] Begin start filebeat program..." | tee -a $logfile
    nohup /usr/local/elk/filebeat/filebeat -e -c /usr/local/elk/filebeat/logs.yml -d "publish" &>> /data/elk/logs/filebeat.log &
    if [ $? -eq 0 ];then
        echo "[`date +'%F %T'`] [INFO] Filebeat program start successful." | tee -a $logfile
        /usr/bin/cagent_tools alarm "Filebeat program start successful"
        exit 0
    else
        echo "[`date +'%F %T'`] [CRITICAL] Filebeat program start failed!" | tee -a $logfile
        /usr/bin/cagent_tools alarm "Filebeat program start failed! Please handle it!"
        exit 6
    fi
else
    echo "[`date +'%F %T'`] [INFO] Filebeat program is running..." | tee -a $logfile
    exit 0
fi
[root@test2 ~]#

3) add crontab scheduled tasks

0-59/5 * * * * /usr/local/scripts/monitor_filebeat.sh &> /dev/null

V. Matters needing attention

1. Data flow direction

-

log files -> filebeat -> kafka -> logstash -> elasticsearch -> kibana

-

2. Clean up elasticsearch indexes daily, keeping only indexes from the last 30 days

1) Script

[root@monitor-elk ~]# cat /usr/local/scripts/del_index.sh
#!/bin/bash
# author:Ellen
# describes:Delete elasticsearch history index.
# version:v1.0
# updated:20170407

# Configuration information
logfile=/usr/local/scripts/log/del_index.log
tmpfile=/tmp/index.txt
host=localhost
port=9200
deldate=`date -d '-30days' +'%Y.%m.%d'`

# Check executed user
if [ `whoami` != "root" ];then
    echo "Please use root to run this script!"
    exit 1
fi

# Delete elasticsearch index
curl -s "$host:$port/_cat/indices?v" | grep -v health | awk '{print $3}' | grep "$deldate" > $tmpfile
if [ ! -s $tmpfile ];then
    echo "[`date +'%F %T'`] [WARNING] $tmpfile is an empty file." | tee -a $logfile
    exit 1
fi
for i in `cat /tmp/index.txt`
do
    curl -XDELETE "http://$host:$port/$i"
    if [ $? -eq 0 ];then
        echo "[`date +'%F %T'`] [INFO] Elasticsearch index $i delete successful." | tee -a $logfile
    else
        echo "[`date +'%F %T'`] [CRITICAL] Elasticsearch index $i delete failed!" | tee -a $logfile
        /usr/bin/cagent_tools alarm "Elasticsearch index $i delete failed!"
        exit 6
    fi
done
[root@monitor-elk ~]#

2) add crontab scheduled tasks

00 02 * * * /usr/local/scripts/del_index.sh &> /dev/null

3. Index by business

Such as hongbao, pano, etc.

4. nginx and tomcat access logs use the default log format

VI. Relevant command reference

1. List all indexes

curl -s 'http://localhost:9200/_cat/indices?v'

2. List nodes

curl 'localhost:9200/_cat/nodes?v'

3. Query cluster health information

curl 'localhost:9200/_cat/health?v'

4. View the specified index data (ten results are returned by default)

curl -XGET 'http://localhost:9200/logstash-nginx-access-2017.05.20/_search?pretty'

5. Delete the specified index

curl -XDELETE http://localhost:9200/logstash-nginx-access-2017.05.20

6. Query template

curl -s 'http://localhost:9200/_template'
