2025-01-17 Update From: SLTechnology News&Howtos (shulou)
Shulou (Shulou.com) 06/02 Report --
Brief introduction
Elasticsearch is currently a popular full-text search engine, used by more and more companies and individuals. It is developed in Java and exposes a RESTful web interface; it provides real-time search and is stable, reliable, fast, and easy to install and use. This article briefly describes how to install and configure it on a Linux system.
Installation environment
1. Operating system: CentOS 7.4
2. Prerequisites for installing Elasticsearch: JDK 1.8 or above; the version I use here is jdk1.8.0_181
3. The latest version of Elasticsearch is 7.5.1; the version I use here is 5.2.2
Installation and configuration
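Before installing anything it is worth confirming the JDK prerequisite. The snippet below is a minimal sketch of pulling the version number out of `java -version`-style output; the sample line is hard-coded (an assumption for illustration) so the parsing can be shown without a JDK present.

```shell
# Sample line in the format printed by `java -version` (hard-coded for illustration)
ver_line='java version "1.8.0_181"'
# Extract the quoted version string, then the minor field ("8" in 1.8.x)
ver=$(echo "$ver_line" | awk -F'"' '{print $2}')
minor=$(echo "$ver" | cut -d. -f2)
echo "version=$ver minor=$minor"
```

In practice you would feed the real `java -version 2>&1` output through the same pipeline and check that the minor field is 8 or higher.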
Note: Elasticsearch cannot be run as root; it must be started by an ordinary user.
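The restriction can be pictured as the check below, a hand-written illustration of the rule rather than Elasticsearch's actual startup code; the UID is hard-coded here so the result is deterministic.

```shell
# Hypothetical effective UID for the 'elastic' user; root is always UID 0
uid=1000
if [ "$uid" -eq 0 ]; then
  msg="refusing to start as root"
else
  msg="ok to start as uid $uid"
fi
echo "$msg"
```

To apply the real check, replace the hard-coded value with $(id -u).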
I. JDK installation
II. Elasticsearch installation
1. Create a user
[root@test-01 ~]# groupadd elastic
[root@test-01 ~]# useradd -g elastic -m elastic

2. Download and install
[root@test-01 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.tar.gz
[root@test-01 ~]# tar zxvf elasticsearch-5.2.2.tar.gz
[root@test-01 ~]# mv elasticsearch-5.2.2 /usr/local/elasticsearch
# create the data and log directories
[root@test-01 ~]# mkdir -p /data/elasticsearch/data
[root@test-01 ~]# mkdir -p /data/elasticsearch/logs
# set ownership
[root@test-01 ~]# chown -R elastic:elastic /usr/local/elasticsearch
[root@test-01 ~]# chown -R elastic:elastic /data/elasticsearch/logs
[root@test-01 ~]# chown -R elastic:elastic /data/elasticsearch/data

3. Modify the configuration file (the configuration files of the cluster nodes are largely the same)
[root@test-01 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml | egrep -v '^(#|$)'
cluster.name: es_cluster
node.name: node-01                  # defined arbitrarily
node.master: true                   # true on the master, false on slaves
node.data: true
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
network.host: 192.168.0.164
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.0.165", "192.168.0.164"]
discovery.zen.minimum_master_nodes: 1
xpack.security.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,Content-Type
xpack.security.authc:
  accept_default_password: true
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

Explanation (most of the settings can be understood at a glance):
cluster.name: the cluster name, used for internal access.
All nodes in the same cluster must use the same name.
node.name: node-01 -- the node name.
node.master: whether this machine is the master of the cluster.
node.data: true -- this node stores data.
network.host: 192.168.0.164 -- this does not have to be a specific IP address; it can also be configured as 0.0.0.0.
http.port: 9200 -- the port number; defaults to 9200 if not configured.
discovery.zen.ping.unicast.hosts: ["192.168.0.165", "192.168.0.164"] -- needed when configuring a cluster. List in [] the IP addresses of the other nodes in the cluster; on the master, list the addresses of all the slave machines.
discovery.zen.minimum_master_nodes: 1 -- if you want to configure this value, please look it up and weigh it for your own cluster. Here I use three machines to simulate the cluster, so it should be 2 (the usual majority rule: master-eligible nodes / 2 + 1).
http.cors.enabled: true -- this and the following settings are related to the IP access policy. If you find that other IP addresses cannot access the node, this parameter was not configured.

4. Adjust system parameters
[root@test-01 ~]# vim /etc/security/limits.conf
root soft nofile 65535
root hard nofile 65535
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
[root@test-01 ~]# vim /etc/sysctl.conf
vm.max_map_count=662144
vm.overcommit_memory = 1
Execute the command to make the configuration take effect:
[root@test-01 ~]# sysctl -p

5. Start and stop
[root@test-01 ~]# su elastic -c "/usr/local/elasticsearch/bin/elasticsearch -d"
[root@test-01 ~]# kill -9 `ps aux | grep [e]lasticsearch | grep -v tail | awk '{print $2}'`

III. Install the head plug-in (under the /usr/local/elasticsearch/elasticsearch-head path)
1. Install node
Install the build dependencies:
[root@test-01 ~]# yum -y install gcc make gcc-c++ openssl-devel
Download the binary package and decompress it:
[root@test-01 ~]# wget http://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.gz
[root@test-01 ~]# tar zxvf node-v4.4.7-linux-x64.tar.gz
[root@test-01 ~]# mv node-v4.4.7-linux-x64 /usr/local/node
[root@test-01 ~]# ln -s /usr/local/node/bin/node /usr/local/bin/node
[root@test-01 ~]# ln -s /usr/local/node/bin/npm /usr/local/bin/npm
[root@test-01 ~]# node -v

2. Install grunt
grunt is a convenient build tool for packaging, compression, testing, execution, and so on. The head plug-in in 5.2 is started through grunt, so grunt needs to be installed:
[root@test-01 ~]# git clone git://github.com/mobz/elasticsearch-head.git
[root@test-01 ~]# cd elasticsearch-head
[root@test-01 ~]# npm install -g grunt-cli
[root@test-01 ~]# npm install    # generates a node_modules folder after execution
Note: on 5.0 and above, elasticsearch-head cannot be placed in the plugins or modules directory of elasticsearch, otherwise elasticsearch will report an error at startup.
Modify the Gruntfile.js file (vim Gruntfile.js): add a hostname attribute and set it to "*".
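The Gruntfile.js change can also be scripted. The sed one-liner below is a hypothetical sketch run against a hand-written sample of the connect/server/options block (the real file's layout may differ, so verify the result before relying on it):

```shell
# Write a small sample of the options block from Gruntfile.js (an assumed layout)
f=$(mktemp)
printf 'connect: {\n  server: {\n    options: {\n      port: 9100,\n' > "$f"
# Insert a hostname attribute right after the port line (GNU sed syntax)
sed -i '/port: 9100,/a hostname: "*",' "$f"
hits=$(grep -c 'hostname: "\*",' "$f")
echo "hostname lines added: $hits"
rm -f "$f"
```

In the real file the added line should be indented to match its block; sed's `a\` form preserves leading whitespace if you need it.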
3. Start grunt
[root@test-01 ~]# grunt server &
You can also start it without installing grunt:
[root@test-01 ~]# npm run start &