
[Zabbix] How to set up Elasticsearch and integrate it with Zabbix

2025-02-24 Update From: SLTechnology News&Howtos shulou NAV: SLTechnology News&Howtos > Servers >



1. Set up Elasticsearch

1. Upload the jdk-8u181-linux-x64.tar.gz and elasticsearch-6.1.4.tar.gz files to a directory on the system (the paths below assume /usr/local)

Install java

Extract the jdk-8u181-linux-x64.tar.gz file

tar -zxvf jdk-8u181-linux-x64.tar.gz

Edit the profile file to add jdk environment variables

vim /etc/profile

Add at the end

# JDK environment variable

JAVA_HOME=/usr/local/jdk1.8.0_181

CLASSPATH=$JAVA_HOME/lib/

PATH=$PATH:$JAVA_HOME/bin

export PATH JAVA_HOME CLASSPATH

Reload the environment variables

source /etc/profile

Run java -version to confirm that Java is available

Extract the elasticsearch-6.1.4.tar.gz file

tar -zxvf elasticsearch-6.1.4.tar.gz

Edit the Elasticsearch configuration file elasticsearch.yml

vim elasticsearch-6.1.4/config/elasticsearch.yml

Mainly modify the following settings (the data storage directory and log storage directory must exist and have the appropriate permissions)

The main configuration settings are explained below.
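The original article showed the configuration in a screenshot that is not reproduced here. A minimal sketch of the relevant elasticsearch.yml settings follows; the cluster name, node name, and bind address are assumed values, and the paths match the directories used later in this guide:

```yaml
# Assumed values - adjust to your environment
cluster.name: es-cluster                          # cluster name
node.name: node-1                                 # name of this node
path.data: /usr/local/elasticsearch-6.1.4/data    # data storage directory
path.logs: /usr/local/elasticsearch-6.1.4/logs    # log storage directory
network.host: 0.0.0.0                             # listen on all interfaces
http.port: 9200                                   # HTTP port (the default)
```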

Start the service (add -d to run it in the background)

/usr/local/elasticsearch-6.1.4/bin/elasticsearch

Lewei tips: Elasticsearch refuses to run as root, so the service must be started by a non-root user.

1. Users need to be created

Add es user

useradd -m es

Set the es user password

passwd es

2. The owner and group of the elasticsearch-6.1.4 directory, the data storage directory, and the log storage directory must be changed to the es user.

chown -R es:es /usr/local/elasticsearch-6.1.4/data

chown -R es:es /usr/local/elasticsearch-6.1.4/logs

chown -R es:es /usr/local/elasticsearch-6.1.4/

3. ERROR: bootstrap checks failed: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]

Reason: unable to create local files, the maximum number of files that users can create is too small

Solution:

Switch to the root user and edit the limits.conf file

vi /etc/security/limits.conf

Add the following lines:

* soft nofile 65536

* hard nofile 131072

* soft nproc 2048

* hard nproc 4096

Note: * applies the limit to all Linux users (a specific user name such as hadoop can be used instead)

Save, log out, and log back in to take effect.
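After logging back in as the es user, a quick way to confirm the new limits took effect (this check is not part of the original article) is:

```shell
# Show the current per-user limits for the logged-in shell.
ulimit -n   # max open file descriptors; should report 65536 once limits.conf applies
ulimit -u   # max user processes; governed by the soft nproc limit
```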

4. max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

Reason: the maximum virtual memory is too small

Solution: switch to root user and modify the configuration file sysctl.conf

vi /etc/sysctl.conf

Add the following configuration:

vm.max_map_count=655360

And execute the command: sysctl -p

5. max number of threads [1024] for user [es] likely too low, increase to at least [2048]

Reason: unable to create local thread problem, the maximum number of threads that users can create is too small

Solution:

Switch to the root user, go to the limits.d directory, and modify the 90-nproc.conf configuration file.

vi /etc/security/limits.d/90-nproc.conf

Modify * soft nproc 1024 to * soft nproc 2048

Start the service as the es user (-d runs it in the background)

su es

/usr/local/elasticsearch-6.1.4/bin/elasticsearch

Open firewall port 9200 TCP

firewall-cmd --zone=public --add-port=9200/tcp --permanent

firewall-cmd --reload

Browse to http://<server-IP>:9200 to verify that the service responds.

2. Integrate with Zabbix

Modify the zabbix_server configuration file

Set HistoryStorageURL=http://192.168.150.10:9200
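For reference, the relevant lines in zabbix_server.conf look like the following. The article only mentions HistoryStorageURL; HistoryStorageTypes is an assumption about which value types are sent to Elasticsearch:

```
### History storage back end (Elasticsearch)
HistoryStorageURL=http://192.168.150.10:9200
# Value types to store in Elasticsearch instead of the SQL database (assumed list)
HistoryStorageTypes=uint,dbl,str,log,text
```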

Modify the zabbix.conf.php file

vi /usr/local/nginx/html/zabbix/conf/zabbix.conf.php
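Inside zabbix.conf.php the frontend needs matching $HISTORY settings. A sketch, with the types list assumed to mirror the server-side configuration:

```php
// Point the Zabbix frontend at the same Elasticsearch node
global $HISTORY;
$HISTORY['url']   = 'http://192.168.150.10:9200';
// History value types read from Elasticsearch (assumed to mirror HistoryStorageTypes)
$HISTORY['types'] = ['uint', 'dbl', 'str', 'log', 'text'];
```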

Restart zabbix_server

Once synchronization succeeds, the data can be viewed with elasticsearch-head:

Method 1. Install the ElasticSearch Head extension directly in the Chrome browser (recommended): drag the extension file onto the Chrome extensions page to add it.

Method 2. Here we introduce a visual management plug-in elasticsearch-head for Elasticsearch, which can easily view, delete and manage data.

Nodejs support is required to install the elasticsearch-head plug-in

1. Nodejs installation

2. Head installation configuration

Please refer to https://blog.csdn.net/zoubf/article/details/79007908

3. Common Elasticsearch commands

(Elasticsearch has no built-in data-cleanup feature, so scripts must be written manually)

1. Using the curl command

curl -X<VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'

VERB HTTP method: GET, POST, PUT, HEAD, DELETE

PROTOCOL http or https (https is only available when an HTTPS proxy sits in front of Elasticsearch)

HOST Hostname of any node in the Elasticsearch cluster

PORT Port of the Elasticsearch HTTP service, 9200 by default

PATH API path / resource path (for example, _count returns the number of documents in the cluster)

QUERY_STRING Optional query parameters (for example, ?pretty returns human-readable JSON)

BODY A request body in JSON format (if the request requires one)

For more details, please refer to

https://www.linuxidc.com/Linux/2016-10/136548.htm

Add an index (zabbix is the index name)

curl -XPUT 'http://192.168.1.20:9090/zabbix?pretty'

After the index is created, five primary shards are allocated by default (this cannot be changed afterwards; each document is routed to one of the five shards by a hashing algorithm, and all writes go to the primary shard), along with one replica shard per primary shard (this can be modified later). Both defaults can be changed in the configuration file or specified at creation time, for example:

curl -XPUT 'http://192.168.1.20:9090/zabbix?pretty' -H 'Content-Type: application/json' -d '{
"settings": {
"number_of_shards": 2,
"number_of_replicas": 0
}
}'

(number_of_shards: two primary shards; number_of_replicas: no replica shards. Note that Elasticsearch 6.x requires the Content-Type header on requests with a JSON body.)

View Index

curl -XGET 'http://192.168.1.20:9090/zabbix?pretty'

Query mode

For example:

Query all documents

This is equivalent to curl -XGET 'http://localhost:9200/test/article/_search?pretty' with no request body

curl -XGET 'http://localhost:9200/test/article/_search?pretty' -H 'Content-Type: application/json' -d '
{
"query": {
"match_all": {}
}
}'

Returns:

{
  # time taken, in milliseconds
  "took": 4,
  "timed_out": false,
  # sharding information
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    # number of documents
    "total": 3,
    "max_score": 1.0,
    "hits": [{
      "_index": "test",
      "_type": "article",
      "_id": "AVf_6fM1vEkwGPLuUJqp",
      "_score": 1.0,
      "_source": {
        "id": 2,
        "subject": "second article title",
        "content": "content of the second article",
        "author": "jam"
      }
    }, {
      "_index": "test",
      "_type": "article",
      "_id": "4",
      "_score": 1.0,
      "_source": {
        "id": 4,
        "subject": "fourth article title",
        "content": "the content of the fourth article - updated",
        "author": "tomi"
      }
    }, {
      "_index": "test",
      "_type": "article",
      "_id": "3",
      "_score": 1.0,
      "_source": {
        "id": 3,
        "subject": "third article title",
        "content": "content of the third article",
        "author": "jam"
      }
    }]
  }
}

Query documents whose author field contains "jam"; this returns the documents with id 2 and 3

curl -XGET 'http://localhost:9200/test/article/_search?pretty' -H 'Content-Type: application/json' -d '
{
"query": {
"match": {
"author": "jam"
}
}
}'

Query documents whose content field contains "updated"; this returns the document with id 4

curl -XGET 'http://localhost:9200/test/article/_search?pretty' -H 'Content-Type: application/json' -d '
{
"query": {
"match": {
"content": "update"
}
}
}'

Query all indexes

curl -XGET 'http://192.168.150.10:9200/_cat/indices/?v'

Delete all data, including self-added indexes

curl -XDELETE 'http://192.168.150.10:9200/*'

2. Cleaning up Elasticsearch data

Since Elasticsearch has no built-in data-retention feature, a script is needed to clean up old data

1) Deleting an index frees space immediately; no "mark as deleted" logic is involved.

2) Deleting a document writes a new version and marks the old one as deleted. Whether disk space is freed depends on whether the old and new documents live in the same segment file, so the background segment merging in ES may physically remove old documents while merging segment files.

However, since a shard may contain hundreds of segment files, there is still a good chance that old and new documents sit in different segments and cannot be physically deleted. To free space manually, the only option is to run a force merge periodically with max_num_segments set to 1.

Delete document

Free up space

Configure scheduled tasks
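The three steps above (delete documents, free up space, configure a scheduled task) can be sketched as a single script. This is a dry-run sketch, not the original article's script: the endpoint, index name, the clock field, and the 30-day retention window are all assumptions. Remove the leading echo on each line to actually execute the curl commands:

```shell
#!/bin/sh
# Sketch of a Zabbix/Elasticsearch history cleanup job (assumed endpoint and retention).
ES="http://192.168.150.10:9200"   # Elasticsearch endpoint (assumed, from earlier in this article)
INDEX="zabbix"                    # index to clean (assumed)
DAYS=30                           # retention window in days (assumed)

# Cutoff timestamp in milliseconds since the epoch (adjust the multiplier
# to match how timestamps are indexed in your data)
CUTOFF=$(( $(date -d "$DAYS days ago" +%s) * 1000 ))

# 1) Delete documents older than the cutoff (marks them as deleted)
echo curl -XPOST "$ES/$INDEX/_delete_by_query?pretty" \
    -H "Content-Type: application/json" \
    -d "{\"query\":{\"range\":{\"clock\":{\"lt\":$CUTOFF}}}}"

# 2) Force-merge segments so deleted documents are physically removed
echo curl -XPOST "$ES/$INDEX/_forcemerge?max_num_segments=1&pretty"
```

To run the job periodically, a crontab entry such as `0 2 * * * /usr/local/bin/es_cleanup.sh` (path assumed) covers the scheduled-task step.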

Download password for jdk-8u181-linux-x64.tar.gz and elasticsearch-6.1.4.tar.gz files: x6I7

Original address

How to set up Elasticsearch and integrate it with Zabbix

http://www.lwops.cn/forum.php?mod=viewthread&tid=70

(source: Lewei, a one-stop operations monitoring and management platform)
