
Deployment and practice of elk (elasticsearch, logstash, kibana) + filebeat


1. Elk description

Full name of elk:

Elasticsearch:

A distributed, highly scalable, near-real-time search and data analysis engine; es for short.

Logstash:

An open-source server-side data processing pipeline that can ingest data from multiple sources simultaneously, transform it, and then send it to your favorite "repository", such as elasticsearch.

Kibana:

An open-source analysis and visualization platform designed for Elasticsearch. You can use Kibana to search, view and interact with the data stored in Elasticsearch indices, and easily perform advanced data analysis and visualize the data in the form of charts.

Together, these three components are what is referred to as elk.

2. Rapid deployment and configuration of elk

1) deployment environment:

CentOS 7; this article is based on a 7.x deployment of the stack.

172.16.0.213 elasticsearch

172.16.0.217 elasticsearch

172.16.0.219 elasticsearch kibana

Kibana can be deployed on one of them.

2) configure the official yum source

All three nodes are configured with the repo:

$ cat /etc/yum.repos.d/elast.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
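With the repo file on each node, a quick refresh confirms the feed resolves before installing anything; a minimal check (assumes outbound access to artifacts.elastic.co):

$ yum clean expire-cache
$ yum makecache
$ yum list elasticsearch kibana logstash filebeat    # all four should be offered from elasticsearch-7.x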

3) installation

$ cat /etc/hosts
172.16.0.213 ickey-elk-213
172.16.0.217 ickey-elk-217
172.16.0.219 ickey-elk-219

$ yum install elasticsearch -y

4) configuration

$ cat /etc/elasticsearch/elasticsearch.yml
cluster.name: elk_test                  # cluster name
node.name: ickey-elk-217                # node name, set per node
node.master: true
node.data: true
path.data: /var/log/elasticsearch/data
path.logs: /var/log/elasticsearch/logs
network.host: 172.16.0.217              # node IP
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200
http.max_content_length: 100mb
bootstrap.memory_lock: true
discovery.seed_hosts: ["172.16.0.213", "172.16.0.217", "172.16.0.219"]
cluster.initial_master_nodes: ["172.16.0.213", "172.16.0.217", "172.16.0.219"]
gateway.recover_after_nodes: 2
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
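One operational note on bootstrap.memory_lock: true above: the elasticsearch process must be allowed to lock RAM, or the node will fail its bootstrap check on start. On a systemd-based RPM install the usual fix is a drop-in override (a minimal sketch, assuming the standard elasticsearch.service unit name):

$ mkdir -p /etc/systemd/system/elasticsearch.service.d
$ cat /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
$ systemctl daemon-reload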

Modify the elasticsearch startup memory allocation:

$ cat /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g

-Xms and -Xmx set the initial (preallocated) and maximum heap size, respectively, and are conventionally set to the same value. (Elastic's own guidance is to keep the heap at no more than about half of system RAM, leaving the rest to the OS page cache.)

Now start elasticsearch:

$ systemctl start elasticsearch
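Once all three nodes are started, a quick curl confirms they actually formed a cluster (any of the three IPs will answer; the comments show what to look for, not captured output):

$ curl -s 'http://172.16.0.213:9200/_cluster/health?pretty'
# expect "number_of_nodes" : 3 and "status" : "green"
$ curl -s 'http://172.16.0.213:9200/_nodes?filter_path=**.mlockall&pretty'
# true on every node if memory locking worked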

5) install kibana

Install it on 219.

$ yum install kibana -y

Configuration

$ cat /etc/kibana/kibana.yml | egrep -v "(^$|^#)"
server.port: 5601
server.host: "172.16.0.219"
server.name: "ickey-elk-219"
elasticsearch.hosts: ["http://172.16.0.213:9200", "http://172.16.0.217:9200", "http://172.16.0.219:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "pass"
elasticsearch.requestTimeout: 40000
logging.dest: /var/log/kibana/kibana.log   # log output; defaults to /var/log/message
i18n.locale: "zh-CN"                       # Chinese interface

For more information on configuration, please see:

https://www.elastic.co/guide/cn/kibana/current/settings.html
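Then start the service and browse to the host and port configured above; a minimal sketch:

$ systemctl start kibana
$ systemctl enable kibana
# UI: http://172.16.0.219:5601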

3. Installation, configuration and practice of logstash

With es (storage and search) and kibana (graphical display and search) installed and configured above, what remains is the data-collection side, which needs logstash and beats; here logstash and filebeat are mainly used.

Logstash is relatively heavyweight as a log collector and its configuration is comparatively complex, but it offers many customizable collection features. Besides installation, the common pieces of configuration are:

1) installation

Install from the yum source configured above:

$ yum install logstash -y

Logstash requires JDK support, so install and configure Java JDK 1.8 or above first.

Here jdk-8u211-linux-x64.rpm is installed.

$ cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export JAVA_BIN=${JAVA_HOME}/bin
export PATH=${PATH}:${JAVA_HOME}/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
export JRE_HOME=/usr/java/latest
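Reload the profile and verify the JDK is picked up (the version string assumes the jdk-8u211 RPM above):

$ source /etc/profile.d/java.sh
$ java -version
# java version "1.8.0_211" ...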

After the installation completes, /usr/share/logstash/bin/system-install needs to be executed to generate the service startup scripts.

On CentOS 6 the service is managed with:

initctl status|start|stop|restart logstash

On CentOS 7:

systemctl restart logstash

2) practical configuration

Collect nginx logs (executed on the nginx server):

$ cat /etc/logstash/conf.d/nginx-172.16.0.14.conf
input {
    file {
        path => ["/var/log/nginx/test.log"]
        codec => json
        sincedb_path => "/var/log/logstash/null"
        discover_interval => 15
        stat_interval => 1
        start_position => "beginning"
    }
}
filter {
    date {
        locale => "en"
        timezone => "Asia/Shanghai"
        match => ["timestamp", "ISO8601", "yyyy-MM-dd'T'HH:mm:ssZZ"]
    }
    mutate {
        convert => ["upstreamtime", "float"]
    }
    mutate {
        gsub => ["message", "\\x", "\\\\x"]
    }
    if [user_agent] {
        useragent {
            prefix => "remote_"
            source => "user_agent"
        }
    }
    if [request] {
        ruby {
            init => "@kname = ['method1','uri1','verb']"
            code => "
                new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))])
                new_event.remove('@timestamp')
                new_event.remove('method1')
                event.append(new_event)
            "
            remove_field => ["request"]
        }
    }
    geoip {
        source => "clientRealIp"
        target => "geoip"
        database => "/tmp/GeoLite2-City.mmdb"
        add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
        add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
    }
    mutate {
        convert => [
            "[geoip][coordinates]", "float",
            "upstream_response_time", "float",
            "responsetime", "float",
            "body_bytes_sent", "integer",
            "bytes_sent", "integer"
        ]
    }
}
output {
    elasticsearch {
        hosts => ["172.16.0.219:9200"]
        index => "logstash-nginx-%{+YYYY.MM.dd}"
        workers => 1
        template_overwrite => true
    }
}
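Before (re)starting logstash it is worth validating the pipeline file; logstash ships with a built-in syntax check (path as above):

$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-172.16.0.14.conf --config.test_and_exit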

Note that the log format in nginx needs to be configured as follows:

log_format logstash '{"@timestamp": "$time_iso8601",'
                    '"@version": "1",'
                    '"host": "$server_addr",'
                    '"size": $body_bytes_sent,'
                    '"domain": "$host",'
                    '"method": "$request_method",'
                    '"url": "$uri",'
                    '"request": "$request",'
                    '"status": "$status",'
                    '"referer": "$http_referer",'
                    '"user_agent": "$http_user_agent",'
                    '"body_bytes_sent": "$body_bytes_sent",'
                    '"bytes_sent": "$bytes_sent",'
                    '"clientRealIp": "$clientRealIp",'
                    '"forwarded_for": "$http_x_forwarded_for",'
                    '"responsetime": "$request_time",'
                    '"upstreamhost": "$upstream_addr",'
                    '"upstream_response_time": "$upstream_response_time"}';
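Note that $clientRealIp is not a built-in nginx variable; it is evidently defined elsewhere in the nginx configuration (for example via a map on $http_x_forwarded_for). After adding the format, point the server's access_log at it and reload; a minimal sketch, with the log path matching the logstash file input above:

access_log /var/log/nginx/test.log logstash;

$ nginx -t && nginx -s reload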

Configure logstash to receive syslog:

$ cat /etc/logstash/conf.d/rsyslog-tcp.conf
input {
    syslog {
        type => "system-syslog"
        host => "172.16.0.217"
        port => 1514
    }
}
filter {
    if [type] == "system-syslog" {
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
            add_field => ["received_at", "%{@timestamp}"]
            add_field => ["received_from", "%{host}"]
        }
        date {
            match => ["syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss"]
        }
    }
}
output {
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => ["172.16.0.217:9200"]
            index => "logstash-%{type}-%{+YYYY.MM.dd}"
            # workers => 1
            template_overwrite => true
        }
    }
}

On the client side, configure rsyslog to forward to it:

$ tail -fn 1 /etc/rsyslog.conf
*.* @172.16.0.217:1514
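Restart rsyslog on the client and send a test line; it should land in the logstash-system-syslog-* index (logger is part of util-linux):

$ systemctl restart rsyslog
$ logger "elk syslog forwarding test"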

Configure collection for the hardware log server:

[yunwei@ickey-elk-217 ~]$ cat /etc/logstash/conf.d/hardware.conf
input {
    syslog {
        type => "hardware-syslog"
        host => "172.16.0.217"
        port => 514
    }
}
filter {
    if [type] == "hardware-syslog" {
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\]): %{GREEDYDATA:syslog_message}" }
            add_field => ["received_at", "%{@timestamp}"]
            add_field => ["received_from", "%{host}"]
        }
        date {
            match => ["syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss"]
        }
    }
}
output {
    if [type] == "hardware-syslog" {
        elasticsearch {
            hosts => ["172.16.0.217:9200"]
            index => "logstash-%{type}-%{+YYYY.MM.dd}"
        }
    }
}

4. Installation, configuration and application practice of filebeat

1) description

Filebeat originated as a rework of the logstash-forwarder source code. In other words, filebeat is the new logstash-forwarder, and it is the first choice for the shipper role in the Elastic Stack.

The following figure, taken from the official documentation, shows the relationship between es, logstash, filebeat, kafka and redis: [figure: elasticsearch / logstash / filebeat / kafka / redis relationship diagram]

2) installation

Also installed from the yum source configured above:

$ yum install filebeat -y

3) configuration for collecting runtime and php-fpm error logs

[root@ickey-app-api-52 yunwei]# cat /etc/filebeat/filebeat.yml
#========================= Filebeat inputs =========================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/wwwroot/*.ickey.cn/runtime/logs/*.log
  fields:
    type: "runtime"
  json.message_key: log
  json.keys_under_root: true
- type: log
  enabled: true
  paths:
    - /var/log/php-fpm/www-error.log
  fields:
    type: "php-fpm"
#========================= Filebeat modules ========================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
#================== Elasticsearch template setting =================
setup.template.settings:
  index.number_of_shards: 2
#============================= Kibana ==============================
setup.kibana:
  host: "172.16.0.219:5601"
#========================= Elastic output ==========================
output.elasticsearch:
  hosts: ["172.16.0.213:9200", "172.16.0.217:9200", "172.16.0.219:9200"]
  indices:
    - index: "php-fpm-log-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "php-fpm"
    - index: "runtime-log-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "runtime"
  pipelines:
    - pipeline: "php-error-pipeline"
      when.equals:
        fields.type: "php-fpm"
#=========================== Processors ============================
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
#============================= Logging =============================
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
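Filebeat ships with built-in checks that are worth running before starting the service; a minimal sketch:

$ filebeat test config     # validates the YAML above
$ filebeat test output     # verifies the elasticsearch hosts are reachable
$ systemctl start filebeat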

Description:

The php-fpm error.log format is as follows:

[29-Oct-2019 11:33:01 PRC] PHP Fatal error: Call to a member function getBSECollection() on null in /var/html/wwwroot/framework/Excel5.php on line 917

We want to extract the time, the PHP fatal error text, and the offending line number. In logstash this would be done with grok, but filebeat has no grok stage of its own, so the parsing is handled by an elasticsearch ingest pipeline. The flow is: filebeat reads the raw line and ships it to elasticsearch, and the ingest pipeline (php-error-pipeline, referenced in the filebeat config above) reshapes it into the fields we want at index time.

Therefore, define the pipeline on one of the elasticsearch nodes:

[root@ickey-elk-213 ~]# cat phperror-pipeline.json
{
    "description": "phperror log pipeline",
    "processors": [
        {
            "grok": {
                "field": "message",
                "patterns": ["%{DATA:datatime} PHP .*: %{DATA:errorinfo} in %{DATA:error-url} on line %{NUMBER:error-line}"]
            }
        }
    ]
}

Create, query and delete it with curl:

# create
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/php-error-pipeline' -d@phperror-pipeline.json
# query
curl -H 'Content-Type: application/json' -XGET 'http://localhost:9200/_ingest/pipeline/php-error-pipeline'
# delete
curl -H 'Content-Type: application/json' -XDELETE 'http://localhost:9200/_ingest/pipeline/php-error-pipeline'
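Before relying on the pipeline, you can replay a sample line through it with the ingest _simulate API; a sketch using the php-fpm error quoted earlier:

$ curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/php-error-pipeline/_simulate' -d '
{
  "docs": [
    { "_source": { "message": "[29-Oct-2019 11:33:01 PRC] PHP Fatal error: Call to a member function getBSECollection() on null in /var/html/wwwroot/framework/Excel5.php on line 917" } }
  ]
}'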

Collect database logs:

filebeat.inputs:
- type: log
  paths:
    - /var/log/mysql/mysql.err
  fields:
    type: "mysqlerr"
  exclude_lines: ['Note']
  multiline.pattern: '^[0-9]{4}.*'
  multiline.negate: true
  multiline.match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 2
setup.kibana:
  host: "172.16.0.219:5601"
output.elasticsearch:
  hosts: ["172.16.0.213:9200"]
  indices:
    - index: "mysql-err-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "mysqlerr"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
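The three multiline settings are what stitch a multi-line MySQL error entry into one event: any line that does not start with a four-digit year (negate of ^[0-9]{4}.*) is appended after the most recent line that does. An illustrative grouping (made-up lines, not captured log output):

# 2019-10-29T11:33:01.000000Z 0 [ERROR] InnoDB: Cannot open datafile ...   <- matches ^[0-9]{4}.*, starts a new event
# Operating system error number 2 in a file operation.                     <- no leading year, appended to the event above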

5. Install and configure elasticsearch-head

Elasticsearch-head is an open-source web interface for graphically viewing and operating on es indices.

1) installation

$ git clone https://github.com/mobz/elasticsearch-head.git
$ cd elasticsearch-head
$ npm install grunt --save --registry=https://registry.npm.taobao.org
└─┬ grunt@1.0.1
... omitted ...
├── path-is-absolute@1.0.1
└── rimraf@2.2.8
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
$ npm install --registry=https://registry.npm.taobao.org
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
[...] fetchMetadata: verb afterAdd /root/.npm/debug/2.6.9/package/package.json written

This step takes quite a while; be patient.
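One prerequisite this walkthrough skips: the head UI runs in the browser and queries elasticsearch cross-origin, so the nodes must allow CORS or every request will be rejected. The standard additions to /etc/elasticsearch/elasticsearch.yml (followed by a restart of elasticsearch) are:

http.cors.enabled: true
http.cors.allow-origin: "*"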

2) configure self-booting service

$ cat /usr/...
#!/bin/bash
# description: starts and stops the elasticsearch-head
data="cd /usr/local/src/elasticsearch-head/; nohup npm run start > /dev/null 2>&1 &"
START() {
    eval $data && echo -e "elasticsearch-head start \033[32m ok \033[0m"
}
STOP() {
    ps -ef | grep grunt | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 > /dev/null && echo -e "elasticsearch-head stop \033[32m ok \033[0m"
}
STATUS() {
    PID=$(ps aux | grep grunt | grep -v grep | awk '{print $2}')
}
case "$1" in
    start)
        START
        ;;
    stop)
        STOP
        ;;
    restart)
        STOP
        sleep 3
        START
        ;;
    *)
        echo "Usage: elasticsearch-head (start|stop|restart)"
        ;;
esac

Visit http://172.16.0.219:9100, as shown in the figure.
