This guide uses the ELK log analysis stack to analyze Nginx access logs, the MySQL slow query log, Tomcat logs, system logs, and so on.
Introduction:
ELK: Elasticsearch + Logstash + Kibana = ELK Stack
Elasticsearch: storage, retrieval, and analysis (could be replaced by Solr)
Logstash: collector; handles input, processing and analysis, and output to Elasticsearch
Kibana: visualization
Note: Elasticsearch supports clustering. After collection, a copy of the logs can be stored on each node (optional).
1. Install the JDK
wget http://sg-new.oss-cn-hangzhou.aliyuncs.com/jdk1.8.0_102.tgz
tar -zxvf jdk1.8.0_102.tgz -C /App/java
vim /etc/profile
# set for java
export JAVA_HOME=/App/java/jdk1.8.0_102
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/apr/lib
source /etc/profile
java -version
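If the JDK is installed correctly, java -version prints something like the following (the exact build strings depend on the JDK package; shown here for 1.8.0_102 as an illustration):
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)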
2. Download, install, and start Elasticsearch (can be deployed as a distributed cluster)
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
echo "
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1" >> /etc/yum.repos.d/elasticsearch.repo
yum install elasticsearch -y
mkdir -p /data/elk/{data,logs}
vi /etc/elasticsearch/elasticsearch.yml
cluster.name: es              # cluster name (all nodes in one cluster must use the same name)
node.name: es-node1           # node name
path.data: /data/elk/data
path.logs: /data/elk/logs
bootstrap.mlockall: true      # set to true to lock memory (avoid swapping)
network.host: 0.0.0.0
http.port: 9200
# discovery.zen.ping.unicast.hosts: ["192.168.2.215", "host2"]
Start:
Check folder permissions before starting.
/etc/init.d/elasticsearch start
-
Test: you can now visit http://192.168.88.48:9200/
Response:
{
  "name": "Bombshell",
  "cluster_name": "es",
  "cluster_uuid": "Rueqwrx2TjaKp24QJDt4wg",
  "version": {
    "number": "2.4.5",
    "build_hash": "c849dd13904f53e63e88efc33b2ceeda0b6a1276",
    "build_timestamp": "2017-04-24T16:18:17Z",
    "build_snapshot": false,
    "lucene_version": "5.5.4"
  },
  "tagline": "You Know, for Search"
}
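The same check can be done from the shell with the standard cluster health API; on a single node the status is usually yellow because replica shards cannot be allocated anywhere:
curl -XGET 'http://192.168.88.48:9200/_cluster/health?pretty'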
3. Install Elasticsearch plugins
Install the head plugin (cluster management):
cd /usr/share/elasticsearch/bin/
./plugin install mobz/elasticsearch-head
ll /usr/share/elasticsearch/plugins/head
Test the plugin:
http://192.168.88.48:9200/_plugin/head/
Install the kopf plugin (cluster resource monitoring and query):
/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
http://192.168.88.48:9200/_plugin/kopf
Restart Elasticsearch:
/etc/init.d/elasticsearch restart
Key points:
If you build a cluster, the other nodes use the same configuration except for node.name.
mkdir -p /data/elk/{data,logs}
vi /etc/elasticsearch/elasticsearch.yml
cluster.name: es              # cluster name (all nodes in one cluster must use the same name)
node.name: es-node2           # node name
path.data: /data/elk/data
path.logs: /data/elk/logs
bootstrap.mlockall: true      # set to true to lock memory (avoid swapping)
network.host: 0.0.0.0
http.port: 9200
# discovery.zen.ping.unicast.hosts: ["192.168.2.215", "host2"]
-
Common problems: the cluster fails to form (only one node appears and the other is missing); note also that by default each index is split into 5 shards.
Problem 1: locking memory. Because Elasticsearch runs as an ordinary user, its memory usage is restricted by default.
vim /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
Also make sure the user's open-file limit is set to 65536.
Problem 2: Zen discovery uses multicast by default, which can cause cluster join failures; change it to unicast.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.2.215", "host2"]
Problem 3: permissions.
chown -R elasticsearch:elasticsearch /data/elk/
The cluster setup is now complete.
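To confirm that both nodes actually joined, the standard _cat APIs give a quick overview:
curl -XGET 'http://192.168.88.48:9200/_cat/nodes?v'    # es-node1 and es-node2 should both be listed
curl -XGET 'http://192.168.88.48:9200/_cat/health?v'   # status should be green once replicas are allocated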
4. Install Kibana
wget https://download.elastic.co/kibana/kibana/kibana-4.5.1-linux-x64.tar.gz
tar zxvf kibana-4.5.1-linux-x64.tar.gz
mv kibana-4.5.1-linux-x64 /usr/local/kibana
vi /etc/rc.local    # start Kibana at boot
/usr/local/kibana/bin/kibana > /var/log/kibana.log 2>&1 &
vi /usr/local/kibana/config/kibana.yml
server.port: 5601
server.host: "192.168.88.48"
elasticsearch.url: "http://192.168.88.48:9200"
Note: kibana.yml differs between versions; pay particular attention to the settings at the bottom of the file in each version.
Start the service:
/usr/local/kibana/bin/kibana &
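A quick sanity check that Kibana is up (assuming net-tools is installed; ss -lntp works as well):
netstat -lntp | grep 5601     # Kibana listens on server.port 5601 when running
tail /var/log/kibana.log      # check the startup log for errors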
5. Install Logstash
A Logstash pipeline has three stages:
input --> filter (optional) --> output
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
echo "
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1" >> /etc/yum.repos.d/logstash.repo
yum install logstash -y
Verify Logstash input and output from the command line.
Test syntax: -e takes the configuration as a command-line argument and runs in the foreground.
/opt/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
Type "my name is caicai." and press Enter.
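With the rubydebug codec, each event is printed as a Ruby-style hash. The output should look roughly like this (timestamp and host will differ):
{
       "message" => "my name is caicai.",
      "@version" => "1",
    "@timestamp" => "2017-06-02T03:12:45.678Z",
          "host" => "linux-node1"
}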
Test 1: input from the keyboard, the same as above, except the configuration comes from a file.
vim /etc/logstash/conf.d/stdout.conf
input {
  stdin {}
}
output {
  stdout {
    codec => "rubydebug"
  }
}
Launch: /opt/logstash/bin/logstash -f /etc/logstash/conf.d/stdout.conf
Test 2: Logstash combined with ES; data is written to ES. Pay attention to the port (not required in older versions).
vim /etc/logstash/conf.d/stdout.conf
input {
  stdin {}
}
output {
  elasticsearch {
    hosts => ["192.168.88.48:9200", "<node2 IP>:9200"]
    # protocol => "http"    # this version has no protocol option
  }
  stdout { codec => rubydebug }    # optional: also print events to the screen
}
At this point, you can see the index and the data in http://192.168.88.48:9200/_plugin/head/
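Besides the head plugin, the indices can be listed over the REST API; with no index option set, the elasticsearch output writes to a logstash-YYYY.MM.dd index by default:
curl -XGET 'http://192.168.88.48:9200/_cat/indices?v'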
Test 3: collect the system log:
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"      # read the file from the beginning
  }
}
output {
  elasticsearch {
    hosts => ["192.168.88.48:9200"]
    index => "system-%{+YYYY.MM.dd}"   # specify an index
  }
}
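One caveat for repeated tests: the file input records its read position in a sincedb file, so start_position => "beginning" only applies to files Logstash has never seen. For testing only, that state can be discarded with an extra option (not part of the original config):
file {
  path => "/var/log/messages"
  type => "system"
  start_position => "beginning"
  sincedb_path => "/dev/null"   # testing only: forget read positions between runs
}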
Test 4: collect the Java exception log; building on the above, add conditionals on the log type.
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"     # read the file from the beginning
  }
  file {
    path => "/logs/java/java.log"
    type => "es-error"
    start_position => "beginning"     # read the file from the beginning
    codec => multiline {              # by default each line is one event; this merges related lines into one
      pattern => "^\["                # lines beginning with "[" start a new event
      negate => true
      what => "previous"              # merge with the previous line
    }
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["192.168.88.48:9200"]
      index => "system-%{+YYYY.MM.dd}"      # specify an index
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["192.168.88.48:9200"]
      index => "es-error-%{+YYYY.MM.dd}"    # specify an index
    }
  }
}
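To illustrate the multiline merge, here is a hypothetical Java log fragment: only the first line starts with "[", so the stack-trace lines are appended to the previous event rather than emitted separately:
[2017-06-02 10:15:32,118][ERROR][index.engine] shard failure      <- starts with "[": new event
java.lang.OutOfMemoryError: Java heap space                       <- merged into the event above
    at org.elasticsearch.common.util.BigArrays.newByteArray(...)  <- merged into the event above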
Test 5: collect the Nginx log.
Define a JSON log format in the Nginx configuration:
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":$status,'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":$body_bytes_sent,'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';
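For this format to take effect, reference it by name in the access_log directive and reload Nginx (the log path matches the Logstash config below):
access_log /logs/nginx/lux.cngold.org.access.log json;
nginx -t && nginx -s reload    # validate the config, then reload
Then the Logstash configuration: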
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"      # read the file from the beginning
  }
  file {
    path => "/logs/nginx/lux.cngold.org.access.log"
    codec => "json"
    start_position => "beginning"      # read the file from the beginning
    type => "nginx-log"
  }
  file {
    path => "/logs/java/java.log"
    type => "es-error"
    start_position => "beginning"      # read the file from the beginning
    codec => multiline {               # by default each line is one event; this merges related lines into one
      pattern => "^\["                 # lines beginning with "[" start a new event
      negate => true
      what => "previous"               # merge with the previous line
    }
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["192.168.88.48:9200"]
      index => "system-%{+YYYY.MM.dd}"       # specify an index
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["192.168.88.48:9200"]
      index => "es-error-%{+YYYY.MM.dd}"     # specify an index
    }
  }
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["192.168.88.48:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"    # specify an index
    }
    stdout {
      codec => rubydebug
    }
  }
}
A minimal configuration for troubleshooting:
input {
  file {
    path => ["/logs/nginx/80-access.log"]
    codec => "json"
    start_position => "beginning"      # read the file from the beginning
    type => "nginx-log"
  }
}
output {
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["192.168.88.48:9200"]
      index => "nginx-80-log-%{+YYYY.MM.dd}"    # specify an index
    }
  }
  stdout {
    codec => rubydebug
  }
}
Test 6: use syslog to collect system logs.
Edit /etc/rsyslog.conf so that logs are forwarded to port 514:
*.* @@192.168.88.48:514    # send all logs to this port on this host
/etc/init.d/rsyslog restart
Configuration file:
vim /etc/logstash/conf.d/04-syslog.conf
input {
  syslog {
    type => "system-syslog"
    host => "192.168.88.48"
    port => "514"
  }
}
output {
  if [type] == "system-syslog" {
    elasticsearch {
      hosts => ["192.168.88.48:9200"]
      index => "system-syslog-%{+YYYY.MM.dd}"
    }
    stdout {
      codec => rubydebug
    }
  }
}
Restart rsyslog and there will be output.
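A simple way to generate a test message: logger writes to rsyslog, which forwards it to port 514, so the message should appear in the rubydebug output and in the system-syslog-* index:
logger "hello from the rsyslog test"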
Test 7: TCP log collection.
vim /etc/logstash/conf.d/05-tcp.conf
input {
  tcp {
    host => "192.168.88.48"
    port => "6666"
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}
Use nc to write data to port 6666:
nc 192.168.88.48 6666
Data can also be written through bash's built-in TCP device /dev/tcp/192.168.88.48/6666.
-
Apache does not support JSON logging, so grok regular expressions are introduced.
To use grok, make sure the patterns plugin is present. Location:
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns
[root@linux-node1 ~]# cat grok.conf
input {
  stdin {}
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}
Test input: 55.3.244.1 GET /index.html 15824 0.043; the output is parsed into the fields named in the pattern.
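For that sample input, the rubydebug output includes the named captures from the pattern, roughly as follows (grok captures are strings unless a type is given):
{
     "message" => "55.3.244.1 GET /index.html 15824 0.043",
      "client" => "55.3.244.1",
      "method" => "GET",
     "request" => "/index.html",
       "bytes" => "15824",
    "duration" => "0.043"
}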
Test 8: use Logstash regular expressions to collect the slow query log (slowlog) of MySQL 5.6.21.
Key point: the multiline merge codec, codec => multiline.
vim /etc/logstash/conf.d/07-mysql-slow.conf
input {
  file {
    path => "/root/slow.log"
    type => "mysql-slow-log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^# User@Host:"
      negate => true
      what => "previous"
    }
  }
}
filter {
  # drop sleep events
  grok {
    match => { "message" => "SELECT SLEEP" }
    add_tag => [ "sleep_drop" ]
    tag_on_failure => []    # prevent the default _grokparsefailure tag on real records
  }
  if "sleep_drop" in [tags] {
    drop {}
  }
  grok {
    match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id: %{NUMBER:row_id:int}\n# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n#\s*" ]
  }
  date {
    match => [ "timestamp", "UNIX" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}
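For reference, a MySQL 5.6 slow-log entry looks roughly like this (values are illustrative). The multiline codec starts a new event at each "# User@Host:" header, the first grok tags SLEEP events so they can be dropped, and the second grok extracts user, query_time, lock_time, rows_sent, rows_examined, and the query itself:
# User@Host: root[root] @ localhost []  Id:     3
# Query_time: 2.000231  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0
SET timestamp=1496371200;
SELECT SLEEP(2);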
After all of the above configuration files are in place, the startup method is the same:
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/*.conf &
(Figures: Kibana dashboards of the collected data.) A sample of production data is analyzed and aggregated; the charts clearly show which client IPs generate the most requests, the response status codes, and other information.
Attachment: http://down.51cto.com/data/2366771