SLTechnology News & Howtos, Shulou (shulou.com), updated 2025-02-24
This blog has moved to cocojoey.lofter.com/post/1eff2f40_10a6d448 and will no longer be updated here...
Log management tools: collection, parsing, visualization
Elasticsearch - a Lucene-based document store used mainly for log indexing, storage, and analysis.
Logstash - a tool for managing events and logs.
Kibana - visualizes logs and time-stamped data.
Graylog2 - a pluggable log and event analysis server with alerting options.
Nxlog - a cross-platform, modular log collector with log buffering and flow control, scheduled jobs, and a built-in configuration language.
Graylog vs ELK
ELK: Elasticsearch + Logstash + Kibana
Graylog: Elasticsearch + Nxlog + Graylog-server (with Graylog-web integrated)
Architecture diagrams (Graylog architecture, minimized architecture, Graylog cluster architecture, and ELK architecture; images not preserved in this copy)
A few words: Splunk is the so-called Google of the logging world, and Nxlog needs no introduction either. Graylog can be thought of as an open-source version of Splunk.
This post covers a minimal installation; the clustered deployment will be covered in later updates:
Installation components:
Mongodb
Elasticsearch
Graylog-server (Graylog-web integrated)
Graylog Collector Sidecar (replaces the old Graylog Collector, which is deprecated)
Installation environment:
CentOS 7.3 + Graylog 2.3 + Elasticsearch 2.x + Nxlog 2.9 + Collector-Sidecar 0.1.3
Host IP: 192.168.55.33 (in this setup the server and the client run on the same host)
Part 1: Server-side deployment
MongoDB:
1: Install the yum repository for MongoDB
vim /etc/yum.repos.d/mongodb-org-3.2.repo
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc
2: Install mongodb
yum install mongodb-org
3: Add to system services and start
systemctl daemon-reload
systemctl enable mongod.service
systemctl start mongod.service
Note: no MongoDB configuration is needed here, including the Graylog connection settings; Graylog creates the data it needs when it starts.
Elasticsearch:
1: Import Elastic GPG key first
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
2: Add yum source
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
3: Install elasticsearch
yum install elasticsearch
4: Configure elasticsearch; modify the following settings (see the official Elasticsearch documentation for the full reference)
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: graylog2   # Elasticsearch cluster name; distinguishes clusters when there is more than one
node.name: node-142   # node name; generated automatically at startup if not set manually
network.host: 192.168.55.33   # bind address, IPv4 or IPv6; default 0.0.0.0
http.port: 9200   # HTTP port for external access; default 9200
transport.tcp.port: 9300   # TCP port for inter-node communication; default 9300
discovery.zen.ping.unicast.hosts: ["192.168.55.33"]   # initial list of master hosts, comma-separated; machines in this list are discovered and joined to the cluster
5: Add to system services and start
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl restart elasticsearch.service
Graylog:
1: Install the Graylog yum repository and the epel repository
rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-2.3-repository_latest.rpm
yum install epel-release
2: Install Graylog-server and related components
yum install graylog-server pwgen
3: Configure graylog2; modify the following settings and keep the defaults elsewhere, or adjust to your environment (see the official Graylog documentation for details)
password_secret = WxWxFDNy36Wgl3VMQoFVyCdJl5TpiilNPujRBW3xoyYx5cB8aP8N
A secret used to salt password hashes, i.e. a long random string mixed into the hash, as in md5(md5(password)+salt) or SHA512(SHA512(password)+salt).
Here we generate it with pwgen: pwgen -N 1 -s 96
root_password_sha2 = 72d7c50d4e1e267df628ec2ee9eabee
The Graylog-web login password, hashed with SHA-256.
Password hashing command: echo -n yourpassword | sha256sum
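As a concrete sketch, the two secrets can be produced like this (the post itself uses pwgen; the /dev/urandom fallback shown here is an assumption for systems without pwgen):

```shell
# Generate a 96-character password_secret; the post uses: pwgen -N 1 -s 96
# Fallback without pwgen, reading random bytes from /dev/urandom (an assumption)
SECRET=$(head -c 1024 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | head -c 96)
echo "password_secret = $SECRET"

# Hash the web login password for root_password_sha2 (SHA-256, as in the post)
ROOT_SHA2=$(printf '%s' 'yourpassword' | sha256sum | awk '{print $1}')
echo "root_password_sha2 = $ROOT_SHA2"
```

Paste both values into /etc/graylog/server/server.conf; a SHA-256 hash is always 64 hex characters, which is a quick sanity check on the value you generated.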
root_email = the alert e-mail address; e-mail alerting is not covered in this post and will be added in a later update
root_timezone = +08:00   Sets the timezone
rest_listen_uri = http://192.168.55.33:9000/api/
The REST API address; it receives heartbeat information from Graylog Collector Sidecar, and collectors also fetch their configuration from this URI.
web_listen_uri = http://192.168.55.33:9000/   The Graylog-web access address
elasticsearch_cluster_name = graylog2   Must match the cluster.name set in elasticsearch
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.55.33:9300
The hosts of the elasticsearch cluster; separate multiple hosts with commas.
mongodb_uri = mongodb://localhost/graylog   The MongoDB connection string; the default works here, with no authentication configured
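Putting the settings above together, the changed lines of /etc/graylog/server/server.conf look roughly like this (the values are the examples used above; generate your own secret and password hash):

```ini
password_secret = WxWxFDNy36Wgl3VMQoFVyCdJl5TpiilNPujRBW3xoyYx5cB8aP8N
root_password_sha2 = <output of: echo -n yourpassword | sha256sum>
root_timezone = +08:00
rest_listen_uri = http://192.168.55.33:9000/api/
web_listen_uri = http://192.168.55.33:9000/
elasticsearch_cluster_name = graylog2
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.55.33:9300
mongodb_uri = mongodb://localhost/graylog
```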
4: Add to system services and start
systemctl daemon-reload
systemctl enable graylog-server.service
systemctl start graylog-server.service
At this point the Graylog-server installation is complete. You can browse to http://192.168.55.33:9000 to log in. The password options in the configuration file above must be set, and the passwords must be generated with the tools specified there, otherwise the login will fail.
Part 2: Deployment on the Collector Side
1: Brief Description
Graylog Collector Sidecar is a lightweight configuration management system for log collectors, also called backends. It runs as a daemon.
The collector configurations are managed centrally and graphically through the Graylog web interface; for special needs, raw backend configurations (called Snippets) can also be stored directly in Graylog.
Through the REST API, the Sidecar daemon periodically fetches all configurations relevant to the target host. Which configurations are fetched depends on the "tags" defined in the host's Sidecar configuration file; for example, a web server host might carry the tags linux and nginx.
On its first run, or whenever it detects a configuration change, Sidecar generates (renders) the relevant backend configuration files and then starts or restarts the reconfigured log collectors.
Sidecar currently supports Nxlog, Filebeat and Winlogbeat. Their supported features are almost identical; all of them can be configured from the web interface, and every collector can use GELF output with SSL encryption.
On the server side, a single input can be shared by multiple collectors; for example, all Filebeat and Winlogbeat instances can send logs to a single Graylog Beats input.
This post uses Nxlog as the backend.
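For reference, a minimal GELF message (the output format mentioned above) is a JSON document; version, host and short_message are required fields, and custom fields carry a leading underscore:

```json
{
  "version": "1.1",
  "host": "web-01",
  "short_message": "A short test message",
  "level": 5,
  "_source_file": "/var/log/messages"
}
```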
1: Install Nxlog and collector-sidecar
The yum repository no longer carries collector-sidecar, so download the rpm packages from the official site and install them:
yum install collector-sidecar-0.1.3-1.x86_64.rpm nxlog-ce-2.9.1716-1_rhel7.x86_64.rpm
2: Add system services and user authorization
gpasswd -a nxlog root
chown -R nxlog:nxlog /var/spool/collector-sidecar/nxlog
graylog-collector-sidecar -service install
systemctl start collector-sidecar
3: Configure Nxlog
vim /etc/nxlog.conf
<Extension gelf>
    Module xm_gelf
</Extension>
<Input in>
    Module im_file
    File "/var/log/messages"
</Input>
<Output out>
    Module om_udp
    Host 192.168.55.33
    Port 12201
    # OutputType GELF makes om_udp emit GELF messages (requires the xm_gelf extension above)
    OutputType GELF
</Output>
<Route r>
    Path in => out
</Route>
4: Configure the Collector
vim /etc/graylog/collector-sidecar/collector_sidecar.yml
server_url: http://192.168.55.33:9000/api/
update_interval: 10
tls_skip_verify: false
send_status: true
list_log_files:
node_id: graylog-collector-sidecar
collector_id: file:/etc/graylog/collector-sidecar/collector-id
cache_path: /var/cache/graylog/collector-sidecar
log_path: /var/log/graylog/collector-sidecar
log_rotation_time: 86400
log_max_age: 604800
tags:
  - nginx
backends:
  - name: nxlog
    enabled: true
    binary_path: /usr/bin/nxlog
    configuration_path: /etc/graylog/collector-sidecar/generated/nxlog.conf
5: Restart the service
systemctl restart collector-sidecar
systemctl restart nxlog
At this point the collector side is configured; next comes the Graylog-web configuration.
Graylog-Web
1: Browse to 192.168.55.33:9000 to open the management interface and configure the input and output information.
Click Collectors --> Manage configurations --> Create configuration
name: the collector's name
tags: the tag names (must match the tags in the collector client's configuration file)
Set the output and input information to match the nxlog configuration file.
After configuring, restart collector-sidecar on the client: systemctl restart collector-sidecar
In Graylog-web, check whether the Collectors are running correctly.
Set up log receiving in Graylog-web:
System --> Inputs --> select how the logs are read --> set the related properties (bind the server-side IP address) --> Save
Check whether it is working properly.
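One quick way to check a GELF UDP input is to send it a hand-built test message (a sketch, assuming a GELF UDP input listening on 192.168.55.33:12201 as configured above; bash's /dev/udp pseudo-device is used, and since UDP is connectionless the send succeeds even if the message is dropped):

```shell
# Minimal GELF message; version, host and short_message are the required fields
GELF_MSG='{"version":"1.1","host":"test-host","short_message":"hello graylog","level":5}'

# Send it over UDP via bash's /dev/udp, then search for "hello graylog" in Graylog-web
echo "$GELF_MSG" > /dev/udp/192.168.55.33/12201 && echo "sent"
```

If the message shows up on the Search page, the input and network path are working; if not, check the input state under System --> Inputs and any firewall rules on port 12201.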
Search: regular-expression matching is supported, and matching results are highlighted.
That is everything installed and working normally. If anything is unclear or does not work properly, please leave a message or join the discussion group: 656633543
Thank you for coming, my friends.
This blog will be updated continuously.