1. First, go to the elasticsearch official website and download version 1.7.0 of the package:
# wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.0.tar.gz
2. Extract the elasticsearch-1.7.0.tar.gz package:
# tar zxf elasticsearch-1.7.0.tar.gz
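The shell sessions later in this article assume the installation lives under /data; a minimal sketch of moving it there (the target path is taken from those sessions):
# mv elasticsearch-1.7.0 /data/
# cd /data/elasticsearch-1.7.0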
3. Explanation of the parameters in the es configuration file (not all of them are used in the real configuration):
# The cluster name identifies your cluster and is used for auto-discovery.
# If you run multiple clusters on the same network, make sure your cluster name is unique.
cluster.name: test-elasticsearch
# Node names are generated automatically at startup, so you don't have to configure
# them manually. You can also assign a specific name to a node:
node.name: "elsearch2"
# allow this node to be elected as a master node (default is allowed)
# node.master: true
# allow this node to store data (default is allowed)
# node.data: true
# You can use these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
#
# node.master: false
# node.data: true
# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
#
# node.master: true
# node.data: false
# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools to inspect the cluster state.
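For example, cluster health can be checked from the shell once es is running (port 9200 assumes the default; this article's own cluster later uses 25556):
# curl 'http://localhost:9200/_cluster/health?pretty'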
# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering or allocation awareness. An attribute
# is a simple key-value pair, similar to node.key: value. Here is an example:
# node.rack: rack314
# By default, multiple nodes are allowed to start from the same installation location.
# To disable it, set the following:
# node.max_local_storage_nodes: 1
# Set the number of shards of an index (5 by default):
# index.number_of_shards: 5
# Set the number of replicas (additional copies) of an index (1 by default):
# index.number_of_replicas: 1
# Note that for development on a local machine with small indices, it usually
# makes sense to "disable" the distributed features:
# index.number_of_shards: 1
# index.number_of_replicas: 0
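These defaults can also be overridden per index at creation time; a minimal sketch against a hypothetical index named test, assuming the default port:
# curl -XPUT 'http://localhost:9200/test' -d '{"settings": {"number_of_shards": 1, "number_of_replicas": 0}}'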
# Path to the directory containing the configuration (this file and logging.yml):
# path.conf: /path/to/conf
# Path to the directory where the index data allocated for this node is stored:
# path.data: /path/to/data
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with the most
# free space on creation. For example:
# path.data: /path/to/data1,/path/to/data2
# Path to temporary files:
# path.work: /path/to/work
# Path to log files:
# path.logs: /path/to/logs
# Path to where plugins are installed:
# path.plugins: /path/to/plugins
# If a plugin listed here is not installed for the current node, the node will not start:
# plugin.mandatory: mapper-attachments,lang-groovy
# ElasticSearch performs poorly when the JVM starts swapping: you should ensure
# that it _never_ swaps.
# Set this property to true to lock the memory:
# bootstrap.mlockall: true
# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for ElasticSearch, leaving enough memory for the operating system itself.
# You should also make sure that the ElasticSearch process is allowed to lock
# the memory, e.g. by using `ulimit -l unlimited`.
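Whether the memory lock actually took effect can be spot-checked through the nodes info API once es is running (a hedged check, assuming the default port); the response should show "mlockall": true for each node:
# curl 'http://localhost:9200/_nodes/process?pretty'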
# ElasticSearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (The range means that if the port is busy, it will automatically
# try the next port.)
# Set the bind address specifically (IPv4 or IPv6):
# network.bind_host: 192.168.0.1
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address:
# network.publish_host: 192.168.0.1
# Set both 'bind_host' and 'publish_host':
# network.host: 192.168.0.1
# Set a custom port for node-to-node communication (9300 by default):
# transport.tcp.port: 9300
# Enable compression for all communication between nodes (disabled by default):
# transport.tcp.compress: true
# Set a custom port to listen for HTTP traffic:
# http.port: 9200
# Set a custom allowed content length:
# http.max_content_length: 100mb
# Disable HTTP completely:
# http.enabled: false
4. Operating system configuration
1. File descriptors
Edit /etc/security/limits.conf and add:
* soft nofile 655350
* hard nofile 655350
Log the current user out and back in for the change to take effect; verify with ulimit -n.
2. Maximum number of memory-mapped areas; minimize use of the swap partition
Edit /etc/sysctl.conf and add:
vm.max_map_count=262144
vm.swappiness=1
Run sysctl -p after the modification to apply it.
3. JVM parameter configuration
There is an elasticsearch.in.sh file in the bin directory under ES_HOME; change
ES_MIN_MEM=256m
ES_MAX_MEM=1g
to appropriate values.
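Once es is running, the effective heap settings can be confirmed through the nodes info API (a hedged check, assuming the default port):
# curl 'http://localhost:9200/_nodes/jvm?pretty'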
5. Plugin installation for es:
Marvel is a management and monitoring tool for Elasticsearch and is free for development. It comes with an interactive console called Sense, which makes it easy to interact with Elasticsearch directly through a browser.
Marvel is a plugin; run the following command in the Elasticsearch bin directory to download and install it:
# ./plugin -i elasticsearch/marvel/latest
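After installation (and a restart of es), the Marvel dashboard should be reachable in a browser at http://172.16.2.24:25556/_plugin/marvel/, assuming the custom http.port 25556 used later in this article.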
Elasticsearch-head is an elasticsearch cluster management tool. It is a standalone web application written entirely in HTML5, and it can be integrated into es as a plugin:
# ./plugin -install mobz/elasticsearch-head
Address: http://172.16.2.24:25556/_plugin/head/
Elasticsearch plug-in bigdesk installation:
Bigdesk is a cluster monitoring tool for elasticsearch that can be used to view various states of an es cluster, such as CPU and memory usage, index data, searches, the number of HTTP connections, and so on.
From the command line, enter the installation directory, change to the bin directory, and run the following command:
# ./plugin -install lukas-vlcek/bigdesk
Enter http://172.16.2.24:25556/_plugin/bigdesk in the browser to see the result.
Note on installing the elasticsearch IK analyzer: without the IK analyzer plugin installed, indexes could not be created at all, accessing http://172.16.2.24:25556/_plugin/head/ showed a blank cluster, and clicking to create an index in the web page got no response.
Note: the GitHub page https://github.com/medcl/elasticsearch-analysis-ik lists the IK version corresponding to each es version; es 1.7.0 corresponds to IK 1.2.6. At first I used IK 1.8 and index creation failed, with IK errors reported in the backend as well.
Download the IK 1.2.6 release: https://github.com/medcl/elasticsearch-analysis-ik/releases?after=v1.6.1
Installation operation:
Download the zip package and extract it to a directory:
# unzip elasticsearch-analysis-ik-master.zip
Install the Maven environment (download the package from the Apache official website) and set the environment variable:
# export PATH=$PATH:/usr/local/maven/bin
Because this is source code, it needs to be packaged with Maven; enter the extracted folder and run:
# cd elasticsearch-analysis-ik-master
# mvn clean package
Create an ik directory under the plugins directory of es, and copy elasticsearch-analysis-ik-1.2.6.jar from the target directory into the ik directory.
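A minimal sketch of that copy step (the /data path matches the session below; the jar location directly under target/ is an assumption about the Maven build output):
# mkdir /data/elasticsearch-1.7.0/plugins/ik
# cp target/elasticsearch-analysis-ik-1.2.6.jar /data/elasticsearch-1.7.0/plugins/ik/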
[root@localhost target]# cd /data/elasticsearch-1.7.0
[root@localhost elasticsearch-1.7.0]# ls
bin  config  data  lib  LICENSE.txt  logs  NOTICE.txt  plugins  README.textile
[root@localhost elasticsearch-1.7.0]# cd plugins/
[root@localhost plugins]# ls
bigdesk  head  ik  marvel
[root@localhost plugins]# cd ik/
[root@localhost ik]# ls
elasticsearch-analysis-ik-1.2.6.jar
Note: in a cluster, copy the jar to the other machines as well.
The following lines need to be added to the es configuration file:
index:
  analysis:
    analyzer:
      ik:
        alias: [ik_analyzer]
        type: org.elasticsearch.index.analysis.IkAnalyzerProvider
      ik_max_word:
        type: ik
        use_smart: false
      ik_smart:
        type: ik
        use_smart: true
marvel.agent.enabled: false
The complete es configuration file is as follows; all three nodes use the same configuration except for the host IP and node.name.
# cat elasticsearch.yml
cluster.name: test-es-cluster
network.host: 172.16.2.24
node.name: "node24"
discovery.zen.ping.unicast.hosts: ["172.16.2.24:25555", "172.16.2.21:25555"]
index.number_of_shards: 5
discovery.zen.minimum_master_nodes: 2
script.groovy.sandbox.enabled: false
transport.tcp.port: 25555
http.port: 25556
script.inline: off
script.indexed: off
script.file: off
index:
  analysis:
    analyzer:
      ik:
        alias: [ik_analyzer]
        type: org.elasticsearch.index.analysis.IkAnalyzerProvider
      ik_max_word:
        type: ik
        use_smart: false
      ik_smart:
        type: ik
        use_smart: true
marvel.agent.enabled: false
Start the es service in the background:
[root@localhost bin]# pwd
/data/elasticsearch-1.7.0/bin
[root@localhost bin]# ./elasticsearch -d
Pick any one of the three cluster machines and create an index:
# curl -X PUT 'http://172.16.2.24:25556/index'
{"acknowledged":true}
Note: a returned result of "acknowledged":true means the index was created successfully.
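With the index in place, the IK analyzer can be spot-checked through the analyze API (a hedged sketch; the sample text is arbitrary):
# curl 'http://172.16.2.24:25556/index/_analyze?analyzer=ik_max_word&pretty' -d '中华人民共和国'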
Access http://172.16.2.24:25556/_plugin/head/ in the browser to check the result.
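Cluster state can also be confirmed from the shell; with all three machines up, the response should report "number_of_nodes": 3 (a hedged check):
# curl 'http://172.16.2.24:25556/_cluster/health?pretty'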