
ELK (Elasticsearch + Logstash + Kibana) service setup (newer versions)

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

What the heck is ELK?

ELK is a collection of three tools, Elasticsearch + Logstash + Kibana, which together form a practical, easy-to-use monitoring stack. Many companies use it to build a visual analysis platform for massive volumes of logs.

1. ElasticSearch

ElasticSearch is a search server based on Lucene. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is developed in Java, released as open source under the Apache license, and is currently a popular enterprise search engine. It is designed for cloud computing: real-time search that is stable, reliable, fast, and easy to install and use.

2. Logstash

Logstash is a tool for managing logs and events. You can use it to collect logs, transform and parse them, and hand the results to other components, such as search or storage, for further use.

3. Kibana

Kibana is an excellent front-end log display framework, which can convert logs into various charts in great detail and provide users with powerful data visualization support.

Second, what are the advantages of ELK?

1. Powerful search: Elasticsearch searches quickly in a distributed fashion and supports a query DSL. Put simply, you can filter data quickly with a configuration-like query language.

2. Complete display: it can render very detailed chart information, and the displayed content can be customized, visualizing the data vividly.

3. Distributed features: it helps solve many problems in operating and maintaining large clusters, including monitoring, alerting, and log collection and analysis.
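To illustrate the query DSL mentioned in point 1, a request body like the following, sent to an index's `_search` endpoint, filters documents whose message contains "error" within the last hour (the field names here are hypothetical, for illustration only):

```json
{
  "query": {
    "bool": {
      "must":   { "match": { "message": "error" } },
      "filter": { "range": { "@timestamp": { "gte": "now-1h" } } }
    }
  }
}
```

The `filter` clause does not affect scoring and can be cached, which is why it is the usual choice for fast data filtering.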

3. What is ELK usually used for?

In the operation and maintenance of massive log systems, ELK components can solve the following problems:

- Centralized query and management of distributed log data

- System monitoring, including monitoring of system hardware and application components

- Troubleshooting

- Security information and event management

- Reporting

The main problems that can be solved by ELK components in big data operation and maintenance system are as follows:

- Log queries, troubleshooting, online checks

- Server monitoring, application monitoring, error alerting, bug management

- Performance analysis, user behavior analysis, security vulnerability analysis, time management

Origin:

In microservice development, multiple servers are generally used for distributed deployment. How to collect the logs scattered across these servers for analysis and processing is something a microservice system must consider.

Set up a log system

To build a log system, you need to consider some factors:

Which technology to use: implement it yourself, or use off-the-shelf components?

Logs need to be defined in a uniform format

The log needs to have an anchor for global tracking

For the first problem: small companies basically lack the R&D capacity to build their own, so they choose third-party open source components. ELK is relatively simple to configure, ships with a ready-made UI, and makes it easy to retrieve log information, so it is the first choice.

The second problem is solved by using log4j2 to define a unified log format and using logstash to filter and parse the log content.
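As a sketch of what that parsing might look like, a grok filter can split a unified log line into fields. The log4j2 layout and field names below are hypothetical, not from the original setup:

```conf
filter {
  # Assumed unified format: "2016-03-09T12:00:00 INFO [traceId] com.example.Foo - message"
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} \[%{DATA:trace_id}\] %{JAVACLASS:class} - %{GREEDYDATA:msg}" }
  }
}
```

Once the fields are extracted, Kibana can filter and aggregate on `level`, `class`, or `trace_id` instead of on raw text.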

For the third problem, there are several ways to produce a global tracking ID: use a UUID or a random number, use a database-generated sequence number, or build a custom ID-generation service. Considering our own service's needs, we chose random-number generation.
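A minimal sketch of the random trace-ID approach, assuming openssl is available (any source of randomness would work just as well):

```shell
# Generate a 128-bit random trace id, hex-encoded (32 characters).
# Attach it to every log line produced while handling one request,
# so the request can be traced across services.
trace_id=$(openssl rand -hex 16)
echo "trace_id=${trace_id}"
```

With 128 bits of randomness, the collision probability is negligible for any realistic log volume, which is why random IDs are often preferred over coordinated sequence numbers.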

How ELK works:

From the left: a logstash-agent is deployed on each webserver. It watches the log file, in a way similar to tail -f, and sends newly added lines to a redis queue. logstash-indexer takes logs off the corresponding redis queue, processes them, and outputs them to elasticsearch, which indexes them as required. Finally, users view and analyze the logs through kibana.
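The pipeline just described can be sketched as two Logstash configurations. The redis host, list key, and log path below are assumptions for illustration, not values from this article:

```conf
# logstash-agent on each webserver: tail the log file, push to redis
input {
  file {
    path => "/var/log/app/*.log"
    start_position => "beginning"
  }
}
output {
  redis { host => "192.168.100.10" data_type => "list" key => "logstash-queue" }
}

# logstash-indexer: pop from the redis list, ship to elasticsearch
input {
  redis { host => "192.168.100.10" data_type => "list" key => "logstash-queue" }
}
output {
  elasticsearch { hosts => "192.168.100.10:9200" }
}
```

The redis list acts as a buffer, so a slow or restarting indexer does not lose log lines produced by the agents.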

Download the software package:

Elasticsearch:

wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.2.1/elasticsearch-2.2.1.tar.gz

Logstash:

wget https://download.elastic.co/logstash/logstash/logstash-2.2.2.tar.gz

Kibana:

wget https://download.elastic.co/kibana/kibana/kibana-4.4.2-linux-x64.tar.gz

Environment building:

Note: Logstash depends on a Java environment, and Logstash 1.5 and later requires at least Java 1.7, so the latest version of Java is recommended. Since we only need the Java runtime, installing just the JRE is enough, but I use the JDK here; please search for the installation steps yourself.

This setup uses online package repositories; the virtual machines can access the Internet.

Install the Java environment

yum -y install java-1.8.0-openjdk*

Install Elasticsearch

tar -xvf elasticsearch-2.2.1.tar.gz -C /usr/local/

ln -s /usr/local/elasticsearch-2.2.1/ /usr/local/elasticsearch

cd /usr/local/elasticsearch

Install the plug-in:

./bin/plugin install mobz/elasticsearch-head

Plug-in installation method 1:

1. elasticsearch/bin/plugin -install mobz/elasticsearch-head

2. Run es

3. Open http://localhost:9200/_plugin/head/

Plug-in installation method 2:

1. Download the zip from https://github.com/mobz/elasticsearch-head and extract it

2. Create the elasticsearch-1.0.0\plugins\head\_site folder

3. Copy the files in the extracted elasticsearch-head-master folder to _site

4. Run es

5. Open http://localhost:9200/_plugin/head/

Create a user and directories (elasticsearch 2.0.0 and later cannot be run as root):

[root@localhost]# groupadd -g 1000 elasticsearch

[root@localhost]# useradd -g 1000 -u 1000 elasticsearch

[root@localhost]# sudo -u elasticsearch mkdir /tmp/elasticsearch

[root@localhost]# ls /tmp/elasticsearch

[root@localhost]# sudo -u elasticsearch mkdir /tmp/elasticsearch/{data,logs}

mkdir /usr/local/elasticsearch/config/scripts

Edit the config file: vim config/elasticsearch.yml

Add the following four lines (notice the space after the colon):

path.data: /tmp/elasticsearch/data

path.logs: /tmp/elasticsearch/logs

network.host: 192.168.100.10  # server IP

http.port: 9200

Start the service:

sudo -u elasticsearch /usr/local/elasticsearch/bin/elasticsearch

Note: for production use, run it in the background:

sudo -u elasticsearch /usr/local/elasticsearch/bin/elasticsearch -d

Note:

As you can see, its transmission port with other nodes is 9300, and the port that accepts HTTP requests is 9200.

# curl 192.168.100.10:9200
{
  "name": "Wilson Fisk",
  "cluster_name": "elasticsearch",
  "version": {
    "number": "2.2.1",
    "build_hash": "d045fc29d1932bce18b2e65ab8b297fbf6cd41a1",
    "build_timestamp": "2016-03-09T09:38:54Z",
    "build_snapshot": false,
    "lucene_version": "5.4.1"
  },
  "tagline": "You Know, for Search"
}

The response shows the configured cluster_name and name, as well as the installed ES version.

The head plug-in just installed lets a browser interact with the ES cluster. It can view cluster status and document contents, perform searches, and issue plain REST requests. You can now open the http://192.168.100.10:9200/_plugin/head/ page to view the ES cluster status:

The function above is still good!

[install Logstash --- log data storage and transfer]

Logstash is essentially a collector with inputs and outputs; we need to specify Input and Output for it (and of course there can be several of each). For example, you can take a log file as input and send the output to elasticsearch.

tar -xvf logstash-2.2.2.tar.gz -C /usr/local/

ln -s /usr/local/logstash-2.2.2/ /usr/local/logstash

Test logstash

(1) screen input and output test

/usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout { } }'

We can see that whatever we type, logstash outputs it in a certain format. The -e flag lets Logstash accept settings directly from the command line, which is particularly quick for repeatedly testing whether a configuration is correct without writing a config file. Use CTRL-C to exit the running Logstash.

Specifying the configuration on the command line with -e is a common way to test quickly. However, once you need more settings, it becomes unwieldy. In that case, create a simple configuration file and point logstash at it, for example under the logstash installation directory.

Configure logstash

Create a profile directory:

mkdir -p /usr/local/logstash/etc

vim /usr/local/logstash/etc/hello_search.conf

Enter the following:

# cat /usr/local/logstash/etc/hello_search.conf
input {
  stdin {
    type => "human"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "192.168.100.10:9200"
  }
}

Launch: /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/hello_search.conf

(input is typed on screen; output goes to the screen in rubydebug format and is also sent to elasticsearch)

Test whether logstash logs are transferred to elasticsearch

Through the following interfaces:

curl 'http://192.168.100.10:9200/_search?pretty'

So far, you have successfully used Elasticsearch and Logstash to collect log data.

[install Kibana --- display the data]

Note: kibana now ships with its own web server; bin/kibana starts it directly. It is recommended not to front it with nginx at first; use the built-in web server.

Install Kibana

After downloading kibana, extract it to the corresponding directory to complete the installation of kibana.

Extract and create a soft link:

tar -xzvf kibana-4.4.2-linux-x64.tar.gz -C /usr/local/

ln -s /usr/local/kibana-4.4.2-linux-x64/ /usr/local/kibana

Start kibana

/usr/local/kibana-4.4.2-linux-x64/bin/kibana

Or

/usr/local/kibana/bin/kibana

At this point, elasticsearch is not yet connected.

Configure kibana

vim /usr/local/kibana-4.4.2-linux-x64/config/kibana.yml
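The settings that usually need attention in kibana.yml are roughly as follows (a sketch for Kibana 4.4; verify the exact keys against your version, and substitute your own ES address):

```yaml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.100.10:9200"
```

Pointing elasticsearch.url at the ES node configured earlier is what connects Kibana to the cluster.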

Restart

/ usr/local/kibana/bin/kibana

Web access:

Kibana listens on port 5601 as its web port.

Use http://kibanaServerIP:5601 to access Kibana. After logging in, first configure an index pattern. By default, Kibana points at the index named logstash-*, and it is time-based. Click "Create".

In order to use Kibana you must configure at least one index pattern. Index patterns are used to identify the Elasticsearch index to run search and analytics against. They are also used to configure fields.

In other words, before using Kibana you need to configure at least one index name or pattern, which determines which index in ES is analyzed.

Add:

[configure logstash as Indexer]

Configure logstash as an indexer and store logstash's log data in Elasticsearch. This example mainly indexes local system logs.

cat /usr/local/logstash/etc/logstash-indexer.conf

input {
  file {
    type => "syslog"
    path => ["/var/log/messages", "/var/log/secure"]
  }
  syslog {
    type => "syslog"
    port => "5544"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch { hosts => "192.168.100.10:9200" }
}

Execute:

/usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash-indexer.conf

Execute:

echo "go Battle between Google alphago and Lee se-dol" >> /var/log/messages

Each log-collection pipeline started this way runs as a separate process.
