This article explains how to use intermediate components to configure ELK + logback to build a log system. The method described here is simple, fast, and practical, so interested readers may wish to follow along with the editor and try it out.
The main project's intermediate components use ELK + logback to build a log system, so let's start by studying ES. The latest version of ES is 5.6.3; versions 5.0.1 and above require JDK 1.8, and Redis is used as the cache server. The following records the steps for installing ES 5.1.1 + Kibana 5.6.3 + the es-head plugin on a Linux environment.

First, go to the Elastic official website https://www.elastic.co/downloads. The download page lists all the Elastic products, all at the latest version (the version numbers of the products also match).

Step 1: install Elasticsearch. After the download is complete, decompress it with unzip or tar -xvzf.

(1) First try to start Elasticsearch: go to Elasticsearch's bin directory and execute ./elasticsearch.

(2) If an error is reported at this point, it is because ES was started with the root account. ES has no permission restrictions and can also execute user scripts, so running it as root is not safe; you need to create a new account to start it. The commands are as follows:

1. Create the user and set its password (for example es / es123456), then grant permissions as root:
   # adduser es
   # passwd es
   chmod +x elasticsearch        (under the bin directory)
2. chown es:es -R /opt/elasticsearch-5.1.1
3. Switch to the new user:
   # su es
4. Modify the vm.max_map_count limit:
   vi /etc/sysctl.conf
   vm.max_map_count=262144
5. Go to the bin directory and execute ./elasticsearch -d &, then look at the output on the command line with:
   curl http://localhost:9200?pretty

At this point ES can be accessed locally, but it cannot be accessed remotely with a browser, because the corresponding address and port have not been opened.

(3) Modify the ES configuration file. Go to config/elasticsearch.yml under the installation directory, set network.host (0.0.0.0 to listen on all interfaces, or the server's own IP as below) and open the access port:

# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
cluster.name: elk-es
# ------------------------------------ Node ------------------------------------
node.name: node-1
# ----------------------------------- Paths ------------------------------------
path.data: /data/program/elk/es/data
path.logs: /data/program/elk/es/logs
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
# bootstrap.memory_lock: true
# ---------------------------------- Network -----------------------------------
network.host: 10.100.0.222
http.port: 9200
# --------------------------------- Discovery ----------------------------------

Then restart ES and access it from a browser.

(4) Problems when starting ES with ./elasticsearch:

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
This means the process does not have enough file descriptors. Solution: switch to the root user and edit limits.conf under the security directory with vim /etc/security/limits.conf, adding the following parameter values at the end of the file (the * symbol at the start of each line must be included):
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 65536
Then restart. After that, you can check the limit with the command ulimit -n.

[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
The maximum value of this system variable needs to be raised. Solution: switch to the root user and modify /etc/sysctl.conf to increase the value:
vm.max_map_count=655360
Execute the command sysctl -p, then restart the ES service.

(5) Start ES: go to the bin directory and execute ./elasticsearch, or execute ./elasticsearch -d to run it in the background. If there is no problem, then execute:
curl 'http://<configured IP address>:9200'
The following result appears:

{
  "name" : "node-1",
  "cluster_name" : "elk-es",
  "cluster_uuid" : "Q04zG6ESQjyjXvZtVrRysA",
  "version" : {
    "number" : "6.2.3",
    "build_hash" : "c59ff00",
    "build_date" : "2018-03-13T10:06:29.741383Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
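As an additional check, the cluster health API can also be queried; a minimal sketch, assuming the host and port configured in elasticsearch.yml above:

curl 'http://10.100.0.222:9200/_cluster/health?pretty'
# for a single-node setup the "status" field is typically "yellow" or "green"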
(6) Install the head plugin.

6.1.1 Download the head installation package. Download address: https://github.com/mobz/elasticsearch-head/archive/master.zip. It is downloaded from GitHub and then uploaded to the virtual machine. Since the head plugin cannot be placed inside the elasticsearch-5.6.3 folder, it needs to be placed and executed separately, so extract it in a sibling directory of elasticsearch-5.6.3. The name of the extracted directory is shown below.
[root@redis-node1 elk]# wget https://github.com/mobz/elasticsearch-head/archive/master.zip
--2018-07-10 11:55:41--  https://github.com/mobz/elasticsearch-head/archive/master.zip
Resolving github.com (github.com)... 13.229.188.59, 52.74.223.119, 13.250.177.223
Connecting to github.com (github.com)|13.229.188.59|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/mobz/elasticsearch-head/zip/master [following]
--2018-07-10 11:55:42--  https://codeload.github.com/mobz/elasticsearch-head/zip/master
Resolving codeload.github.com (codeload.github.com)... 54.251.140.56, 13.250.162.133, 13.229.189.0
Connecting to codeload.github.com (codeload.github.com)|54.251.140.56|:443... connected.
HTTP request sent, awaiting response...
6.1.2 elasticsearch-head extraction and installation
[root@redis-node1 es-head]# pwd
/data/program/elk
[root@redis-node1 elk]# unzip master.zip -d ./
[root@redis-node1 elk]# ll
total 904
drwxr-xr-x  6 root root   4096 Sep 15  2017 elasticsearch-head-master
drwxr-xr-x  9 elk  elk     188 Jul 10 11:42 es
drwxrwxr-x 13 elk  elk     260 Jul  6 12:47 kibana
drwxr-xr-x 12 elk  elk     278 Jul 10 10:43 logstash
-rw-r--r--  1 root root 921421 Jul 10 11:55 master.zip
[root@redis-node1 elk]# mv elasticsearch-head-master es-head
[root@redis-node1 elk]# ll
total 904
drwxr-xr-x  9 elk  elk     188 Jul 10 11:42 es
drwxr-xr-x  6 root root   4096 Sep 15  2017 es-head
drwxrwxr-x 13 elk  elk     260 Jul  6 12:47 kibana
drwxr-xr-x 12 elk  elk     278 Jul 10 10:43 logstash
-rw-r--r--  1 root root 921421 Jul 10 11:55 master.zip
[root@redis-node1 elk]# cd es-head
[root@redis-node1 es-head]# pwd
/data/program/elk/es-head
6.2 Running the head plugin requires node.js support, so install node.js first.
6.2.1 Execute command 1: curl -sL https://rpm.nodesource.com/setup_8.x | bash -
curl -sL https://rpm.nodesource.com/setup_8.x | bash -

## Installing the NodeSource Node.js 8.x LTS Carbon repo...
## Inspecting system...
+ rpm -q --whatprovides redhat-release || rpm -q --whatprovides centos-release || rpm -q --whatprovides cloudlinux-release || rpm -q --whatprovides sl-release
+ uname -m
## Confirming "el7-x86_64" is supported...
+ curl -sLf -o /dev/null 'https://rpm.nodesource.com/pub_8.x/el/7/x86_64/nodesource-release-el7-1.noarch.rpm'
## Downloading release setup RPM...
+ mktemp
+ curl -sL -o '/tmp/tmp.SgDKQBVM0p' 'https://rpm.nodesource.com/pub_8.x/el/7/x86_64/nodesource-release-el7-1.noarch.rpm'
## Installing release setup RPM...
+ rpm -i --nosignature --force '/tmp/tmp.SgDKQBVM0p'
## Cleaning up...
+ rm -f '/tmp/tmp.SgDKQBVM0p'
## Checking for existing installations...
+ rpm -qa 'node|npm' | grep -v nodesource
## Run `sudo yum install -y nodejs` to install Node.js 8.x LTS Carbon and npm.
## You may also need development tools to build native addons:
     sudo yum install gcc-c++ make
## To install the Yarn package manager, run:
     curl -sL https://dl.yarnpkg.com/rpm/yarn.repo | sudo tee /etc/yum.repos.d/yarn.repo
     sudo yum install yarn
6.2.2 Command 2: yum install -y nodejs
yum install -y nodejs
Loaded plugins: fastestmirror
base                                                     | 3.6 kB  00:00:00
epel                                                     | 3.2 kB  00:00:00
extras                                                   | 3.4 kB  00:00:00
mysql-connectors-community                               | 2.5 kB  00:00:00
mysql-tools-community                                    | 2.5 kB  00:00:00
mysql57-community-dmr                                    | 2.5 kB  00:00:00
nginx                                                    | 2.9 kB  00:00:00
nodesource                                               | 2.5 kB  00:00:00
updates                                                  | 3.4 kB  00:00:00
(1/7): epel/x86_64/group_gz                              |  88 kB  00:00:00
(2/7): epel/x86_64/updateinfo                            | 926 kB  00:00:00
(3/7): epel/x86_64/primary                               | 3.5 MB  00:00:00
(4/7): extras/7/x86_64/primary_db                        | 150 kB  00:00:00
(5/7): updates/7/x86_64/primary_db                       | 3.6 MB  00:00:00
(6/7): nodesource/x86_64/primary_db                      |  37 kB  00:00:01
(7/7): nginx/x86_64/primary_db                           |  35 kB  00:00:02
Determining fastest mirrors
 * base: mirrors.zju.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package nodejs.x86_64 2:8.11.3-1nodesource will be installed
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package      Arch        Version                     Repository          Size
================================================================================
Installing:
 nodejs       x86_64      2:8.11.3-1nodesource        nodesource          17 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 17 M
Installed size: 51 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/nodesource/packages/nodejs-8.11.3-1nodesource.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 34fa74dd: NOKEY
Public key for nodejs-8.11.3-1nodesource.x86_64.rpm is not installed
nodejs-8.11.3-1nodesource.x86_64.rpm                     |  17 MB  00:07:52
Retrieving key from file:///etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-EL
Importing GPG key 0x34FA74DD:
 Userid     : "NodeSource"
 Fingerprint: 2e55 207a 95d9 944b 0cc9 3261 5ddb e8d4 34fa 74dd
 Package    : nodesource-release-el7-1.noarch (installed)
 From       : /etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-EL
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : 2:nodejs-8.11.3-1nodesource.x86_64                           1/1
  Verifying  : 2:nodejs-8.11.3-1nodesource.x86_64                           1/1

Installed:
  nodejs.x86_64 2:8.11.3-1nodesource

Complete!
6.2.3 OK, after the execution is complete, you can use the command node -v to verify that the installation was successful; npm is installed along with it and can be verified with npm -v.
[root@redis-node1 ~]# node -v
v8.11.3
[root@redis-node1 ~]# npm -v
5.6.0
[root@redis-node1 ~]#
6.3 Install grunt, which must be installed because the execution file of the head plugin is run by the grunt command.
6.3.1 Installation command 1: npm install grunt --save-dev; command 2: npm install
[root@redis-node1 ~]# npm install grunt --save-dev
npm WARN saveError ENOENT: no such file or directory, open '/root/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/root/package.json'
npm WARN root No description
npm WARN root No repository field.
npm WARN root No README data
npm WARN root No license field.
+ grunt@1.0.3
added 96 packages in 32.604s
[root@redis-node1 ~]# npm install
npm WARN saveError ENOENT: no such file or directory, open '/root/package.json'
npm WARN enoent ENOENT: no such file or directory, open '/root/package.json'
npm WARN root No description
npm WARN root No repository field.
npm WARN root No README data
npm WARN root No license field.
up to date in 0.996s
[root@redis-node1 ~]#
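The ENOENT warnings above appear because the commands were run in /root, where there is no package.json. A hedged alternative, assuming the es-head path used earlier, is to run the same two commands inside the plugin directory so that its own package.json and Gruntfile.js are picked up:

cd /data/program/elk/es-head
npm install grunt --save-dev
npm install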
6.3.2 Modify the configuration file. cd into the elasticsearch-head-master folder (renamed es-head above) and execute vim Gruntfile.js; add the hostname attribute and set it to '*', as shown below:
connect: {
    server: {
        options: {
            port: 9100,
            hostname: '*',
            base: '.',
            keepalive: true
        }
    }
}
6.3.3 Modify the _site/app.js file (vi _site/app.js): change the connection address used by head, as shown below:
(function(app, i18n) {
    var ui = app.ns("ui");
    var services = app.ns("services");
    app.App = ui.AbstractWidget.extend({
        defaults: {
            base_uri: null
        },
        init: function(parent) {
            this._super();
            this.prefs = services.Preferences.instance();
            this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.100.0.222:9200";
            if (this.base_uri.charAt(this.base_uri.length - 1) !== "/") {
                // XHR request fails if the URL is not ending with a "/"
                this.base_uri += "/";
            }
6.3.4 Last command: grunt server &. OK, the head service is running once this completes.
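If the head service should keep running after the terminal session ends, one option, following the nohup pattern used for the start scripts later in this article, is:

cd /data/program/elk/es-head
nohup grunt server > grunt.log 2>&1 &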
6.3.5 Issues involved: the page cannot be accessed properly in the browser; check whether the firewall is turned off.
6.3.5.1 Execute the command service iptables status to view the status; to turn off the firewall, execute the command service iptables stop. The final result is as follows (I did not configure a cluster). Note that the port used by the head plugin is no longer 9200 but 9100. If the health value appears but the connection still fails, the solution is to cd into elasticsearch-5.6.3/config and add the following to elasticsearch.yml (vi elasticsearch.yml):
http.cors.enabled: true
http.cors.allow-origin: "*"
6.4 Then re-execute ES with ./elasticsearch; this time the head page connects successfully.
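To confirm that the CORS settings took effect, a request carrying an Origin header can be sent to ES; a minimal sketch, assuming the host used above:

curl -I -H "Origin: http://10.100.0.222:9100" http://10.100.0.222:9200
# the response headers should now include an Access-Control-Allow-Origin entry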
Step 2: now let's install Logstash and the related environment configuration.

(1) Download the Logstash installation package:
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.1.tar.gz
tar -xzf logstash-6.3.1.tar.gz
mv logstash-6.3.1 logstash
cd logstash

(2) Add the configuration file logstash-es.conf to Logstash's config directory. Its contents are as follows:

input {
    tcp {
        ## host:port is the destination configured in the logback appender below; Logstash acts as a server here and opens port 9601 to receive messages from logback
        host => "100.10.0.222"
        port => 9601
        # server mode
        mode => "server"
        tags => ["logback_trace_id"]
        ## format: json
        codec => json_lines
    }
}
output {
    elasticsearch {
        # ES address
        hosts => "100.10.0.222:9200"
        # specify the index name instead of the default, used to distinguish each project
        index => "%{[serverName]}-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}

(3) Create a startup script in the Logstash directory:
touch start.sh
chmod 744 start.sh

#!/bin/sh
APP_PATH=/data/program/elk/logstash
nohup sh bin/logstash -f $APP_PATH/config/logstash-es.conf > $APP_PATH/logs/logstash-log.log 2>&1 &
echo logstash run
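Before launching the script above, the pipeline definition can be syntax-checked; a minimal sketch, assuming the paths used above:

cd /data/program/elk/logstash
bin/logstash -f config/logstash-es.conf --config.test_and_exit
# "Configuration OK" means the pipeline file parses cleanly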
(4) Add the following information to the configuration file (logback.xml) of the application program. Two placeholders are used:
(1) gc.server.project.name is the project name and is configured in application.properties.
(2) gc.server.ip.port is the IP and port of the TCP socket service provided by Logstash.
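A minimal sketch of a logback appender matching the two placeholders above, assuming the logstash-logback-encoder library is on the application's classpath (the appender name and log level here are illustrative, not taken from the original configuration):

<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- host:port of the TCP input opened by Logstash, placeholder (2) above -->
    <destination>${gc.server.ip.port}</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <!-- extra JSON field picked up by the %{[serverName]} index pattern, placeholder (1) above -->
        <customFields>{"serverName": "${gc.server.project.name}"}</customFields>
    </encoder>
</appender>
<root level="INFO">
    <appender-ref ref="LOGSTASH"/>
</root>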
${gc.server.ip.port} {"serverName": "${gc.server.project.name}"} step 3. Now let's install the kibana program software installation and configuration (1), Download the kibana installation package kibana-6.2.3-linux-x86_64.tar.gz wget https://artifacts.elastic.co/downloads/kibana/kibana-6.3.1-linux-x86_64.tar.gz shasum-a 512 kibana-6.3.1-linux-x86_64.tar.gz tar-xzf kibana-6.3.1-linux-x86_64.tar.gz cd kibana-6.3.1-linux-x86_64/ (2), extract and install And configure [elk@redis-node1 program] $tar-xvf kibana-6.2.3-linux-x86_64.tar.gz [elk@redis-node1 program] $mv kibana-6.2.3-linux-x86_64 kibana (3), Project configuration config/kibana.yml [elk@redis-node1 program] $vi config/kibana.yml server.port: 5601 server.host: "10.100.0.222" # elasticsearch.url: "http://localhost:9200" elasticsearch.url:" http://10.100.0.222:9200" (4), create startup script command touch start.sh in kibana program Chmod 744 start.sh #! / bin/sh APP_PATH=/data/program/elk/kibana nohup sh bin/kibana > $APP_PATH/logs/kibana-log.log 2 > & 1 & echo kibana run # bin/kibana & so far, I believe you have a deeper understanding of "how to use intermediate components to configure ELK+logback to build a log system". You might as well do it in practice. Here is the website, more related content can enter the relevant channels to inquire, follow us, continue to learn!