This article walks through installing and using Elasticsearch step by step; I hope you find it practical and get something out of it.
Elasticsearch is a Lucene-based search server. It provides a distributed, multi-user full-text search engine behind a RESTful web interface. Developed in Java and released as open source under the Apache license, Elasticsearch is a popular enterprise search engine.
Elastic
Elastic has a complete product line and solutions: Elasticsearch, Kibana, Logstash, and so on. These three components make up the so-called ELK stack.
Elasticsearch
Elasticsearch official website: https://www.elastic.co/cn/products/elasticsearch
Elasticsearch has the following characteristics:
Distributed: there is no need to set up a cluster manually (Solr requires manual configuration, using ZooKeeper as the registry)
RESTful style: all APIs follow REST principles and are easy to use
Near-real-time search: updates become searchable in Elasticsearch almost immediately
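As an illustration of the RESTful style, index and document operations map directly onto HTTP verbs and URLs. The index name `goods` and document id `1` below are hypothetical, shown only to convey the idea:

```
PUT /goods/_doc/1       # create or replace document 1 in the "goods" index
GET /goods/_doc/1       # read the document back
DELETE /goods/_doc/1    # delete it
```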
Version
Currently, the latest version of Elasticsearch is 6.3.1; this tutorial uses the 6.x series (the package installed below is 6.2.4).
JDK 1.8 or above is required on the virtual machine.
Installation and configuration
To simulate a real-world scenario, we will install Elasticsearch on Linux.
Create a new user leyou
For security reasons, Elasticsearch does not allow running as the root user by default.
Create a user:
useradd leyou
Set the password:
passwd leyou
Switch users:
su - leyou
Upload the installation package and extract it
We upload the installation package to the /home/leyou directory.
Unzip:
tar -zxvf elasticsearch-6.2.4.tar.gz
Delete the package:
rm -rf elasticsearch-6.2.4.tar.gz
Let's rename the directory:
mv elasticsearch-6.2.4/ elasticsearch
Enter and view the directory structure:
Modify configuration
Let's go to the config directory: cd config
There are two configuration files that need to be modified:
jvm.options
Elasticsearch is based on Lucene, and Lucene itself is implemented in Java, so we need to configure the JVM parameters.
Edit jvm.options:
vim jvm.options

The default heap configuration is as follows:

-Xms1g
-Xmx1g

This takes up too much memory, so let's make it smaller:

-Xms512m
-Xmx512m

elasticsearch.yml

vim elasticsearch.yml

Modify the data and log directories:

path.data: /home/leyou/elasticsearch/data  # data directory location
path.logs: /home/leyou/elasticsearch/logs  # log directory location
We changed the data and logs directories to the elasticsearch installation directory. But these two directories don't exist, so we need to create them.
Go to the root directory of elasticsearch and create:
mkdir data
mkdir logs
Modify the bound ip:
network.host: 0.0.0.0  # bind to 0.0.0.0, allowing any ip to access
Only local access is allowed by default, and can be accessed remotely when modified to 0.0.0.0
At present, we are doing stand-alone installation. If you want to do a cluster, you only need to add other node information to this configuration file.
Additional configurable information for elasticsearch.yml:
cluster.name: the Elasticsearch cluster name; defaults to elasticsearch. It is recommended to change it to a meaningful name.
node.name: the node name; ES assigns a random name by default. Specifying a meaningful name makes management easier.
path.conf: the configuration file path. A tar or zip installation defaults to the config folder under the ES root directory; an rpm installation defaults to /etc/elasticsearch.
path.data: the storage path for index data; defaults to the data folder under the ES root directory. Multiple storage paths can be set, separated by commas.
path.logs: the storage path for log files; defaults to the logs folder under the ES root directory.
path.plugins: the plug-in storage path; defaults to the plugins folder under the ES root directory.
bootstrap.memory_lock: set to true to lock the memory used by ES and avoid swapping.
network.host: sets bind_host and publish_host. Set to 0.0.0.0 to allow public network access.
http.port: the HTTP port for external services; defaults to 9200.
transport.tcp.port: the communication port between cluster nodes.
discovery.zen.ping.timeout: the connection timeout for ES automatic node discovery; defaults to 3 seconds. If network latency is high, set a larger value.
discovery.zen.minimum_master_nodes: the minimum number of master-eligible nodes. The formula for this value is (master_eligible_nodes / 2) + 1; for example, with 3 master-eligible nodes, set it to 2.
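Putting the settings above together, a minimal single-node elasticsearch.yml for this tutorial might look like the fragment below. The cluster and node names are illustrative, not part of the original setup:

```yaml
cluster.name: leyou-es      # illustrative cluster name
node.name: node-1           # illustrative node name
path.data: /home/leyou/elasticsearch/data
path.logs: /home/leyou/elasticsearch/logs
network.host: 0.0.0.0       # allow remote access
http.port: 9200             # default HTTP port
```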
Modify file permissions:
We want the leyou user to own the elasticsearch folder; -R applies the ownership recursively. Run as root:
chown leyou:leyou elasticsearch/ -R
Running
Go to the elasticsearch/bin directory and you can see the following execution file:
Then enter the command:
./elasticsearch
Found that there was an error and failed to start:
Error 1: kernel version is too low
We are using CentOS 6, whose Linux kernel version is 2.6. Elasticsearch's system call filter requires at least kernel 3.5. It doesn't matter, though; we can simply disable this feature.
Modify the elasticsearch.yml file and add the following configuration at the bottom:
bootstrap.system_call_filter: false
And then restart.
Error 2: file descriptor limit is too low
Start it again, and another error appears:
[1]: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
We are running as the leyou user rather than root, and a normal user's limit on open file descriptors is too low by default.
First log in with the root user.
Then modify the configuration file:
vim /etc/security/limits.conf

Add the following:

* soft nofile 65536
* hard nofile 131072
* soft nproc 4096
* hard nproc 4096
Error 3: insufficient number of threads
In the error report just now, there is another line:
[1]: max number of threads [1024] for user [leyou] is too low, increase to at least [4096]
This is because there are not enough threads.
Continue to modify the configuration:
vim /etc/security/limits.d/90-nproc.conf

Change the following line:

* soft nproc 1024

to:

* soft nproc 4096
Error 4: process virtual memory
[3]: max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
vm.max_map_count limits the number of VMAs (virtual memory areas) that a process can have. Continue to modify the configuration:

vim /etc/sysctl.conf

Add the following:

vm.max_map_count=655360

Then execute:

sysctl -p
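After re-logging in, the fixes for errors 2 to 4 can be verified with a small script. This is a convenience sketch, not part of the original tutorial; the thresholds are the minimums quoted in the error messages above:

```shell
# Sanity-check the limits Elasticsearch complains about.
# Run as the user that will start Elasticsearch (e.g. leyou).
nofile=$(ulimit -n)     # max open file descriptors
nproc=$(ulimit -u)      # max user processes (threads)
mmc=$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)  # kernel VMA limit

check() {  # usage: check NAME VALUE MINIMUM
    if [ "$2" != "unlimited" ] && [ "$2" -lt "$3" ] 2>/dev/null; then
        echo "$1 = $2 (too low, need at least $3)"
    else
        echo "$1 = $2 (ok)"
    fi
}

check "max file descriptors" "$nofile" 65536
check "max user processes"   "$nproc"  4096
check "vm.max_map_count"     "$mmc"    262144
```

If any line reports "too low", the corresponding configuration change above has not taken effect yet.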
Restart the terminal window
After all errors have been corrected, be sure to restart your terminal session (for example, your Xshell window); otherwise the new limits will not take effect.
Start
Start it again, and it finally works!
You can see that two ports are bound:
9300: the port for communication between cluster nodes
9200: the port for client access (HTTP)
We visit: http://192.168.56.101:9200 in the browser
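If the service is running, the browser returns a JSON banner. The fragment below is an abridged, illustrative 6.x response; the name, cluster name, and UUIDs will differ on your machine:

```json
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "6.2.4",
    "lucene_version" : "7.2.1"
  },
  "tagline" : "You Know, for Search"
}
```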
Install kibana
What is Kibana?
Kibana is a Node.js-based data statistics tool for Elasticsearch indices; it can use Elasticsearch's aggregation capabilities to generate a variety of charts, such as bar charts, line charts, pie charts, and so on.
It also provides a console for manipulating Elasticsearch index data, and provides some API hints, which is very helpful for us to learn the syntax of Elasticsearch.
Installation
Kibana depends on Node.js. Our virtual machine does not have Node installed, but our Windows machine does, so we chose to run Kibana on Windows.
The latest version is consistent with elasticsearch, also 6.3.0
Just unzip it to a specific directory.
Configuration and operation
Go to the config directory under the installation directory and modify the kibana.yml file:
Modify the address of the elasticsearch server:
elasticsearch.url: "http://192.168.56.101:9200"
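For reference, a minimal kibana.yml for this setup might contain the lines below. The port and host values are the standard Kibana 6.x defaults, shown here as assumptions rather than required changes:

```yaml
server.port: 5601                                # default Kibana port
server.host: "localhost"
elasticsearch.url: "http://192.168.56.101:9200"  # address of the ES node above
```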
Running
Go to the bin directory under the installation directory:
Double-click to run:
It is found that the listening port of kibana is 5601.
We visit: http://127.0.0.1:5601
Console
Select the DevTools menu on the left to go to the console page:
On the right side of the page, we can enter a request to access Elasticsearch.
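For example, a simple standard request to check cluster health can be typed on the left, and the console sends it to the Elasticsearch server and shows the JSON response on the right:

```
GET _cluster/health
```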
Install the ik word splitter
Lucene's own IK analyzer has not been maintained since 2012. We will instead use an upgraded version based on it that is developed as an Elasticsearch plug-in; it is maintained and released in step with Elasticsearch, keeping the same version number. The version used here is 6.3.0.
Installation
Upload the zip package in the material and decompress it to the plugins directory of the Elasticsearch directory:
Use the unzip command to extract:
unzip elasticsearch-analysis-ik-6.3.0.zip -d ik-analyzer
Then restart elasticsearch:
test
Let's set the syntax aside for now and just give it a quick test.
Enter the following request in the kibana console:
POST _analyze
{
  "analyzer": "ik_max_word",
  "text": "我是中国人"
}
Run to get the result:
{
  "tokens": [
    { "token": "我", "start_offset": 0, "end_offset": 1, "type": "CN_CHAR", "position": 0 },
    { "token": "是", "start_offset": 1, "end_offset": 2, "type": "CN_CHAR", "position": 1 },
    { "token": "中国人", "start_offset": 2, "end_offset": 5, "type": "CN_WORD", "position": 2 },
    { "token": "中国", "start_offset": 2, "end_offset": 4, "type": "CN_WORD", "position": 3 },
    { "token": "国人", "start_offset": 3, "end_offset": 5, "type": "CN_WORD", "position": 4 }
  ]
}
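The plug-in also provides a coarser-grained analyzer, ik_smart, which keeps fewer, longer tokens than ik_max_word. You can compare the two by running the same request with the analyzer swapped:

```
POST _analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}
```

This completes the installation and basic use of Elasticsearch, Kibana, and the IK analyzer.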