2025-02-28 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains the ElasticSearch 7 configuration file: the main configuration items, the new cluster-coordination settings added in version 7, and how to build a pseudo-distributed cluster with Docker Compose. The approach is simple, fast, and practical.
Configuration items (choose as needed)
cluster.name: elasticsearch  # Cluster name, "elasticsearch" by default. The es service discovers other nodes on the same network segment via broadcast and communicates via multicast. Multiple clusters can coexist on one segment, distinguished by cluster name.
node.name: "Franz Kafka"  # Name of this node. If unset, a name is picked at random from name.txt in the config folder of the es jar, which contains many amusing names added by the authors.
node.master: true  # Whether this node is eligible to be elected master (eligibility only, not a guarantee that it becomes master). Default true. The first machine in the cluster becomes master by default; if it goes down, a new master is elected.
node.data: true  # Whether this node stores index data. Default true.
index.number_of_shards: 5  # Default number of primary shards per index. Default 5.
index.number_of_replicas: 1  # Default number of replicas per index. Default 1. With the defaults on a single-machine cluster, cluster health is yellow: all data is available, but some replicas are unassigned. Check health with curl 'localhost:9200/_cat/health?v'; green means everything is OK and the cluster is fully functional, yellow means all data is available but some replicas are unallocated, red means some data is unavailable for some reason.
path.conf: /path/to/conf  # Path to configuration files. Default is the config folder under the es root directory.
path.data: /path/to/data  # Path(s) for index data. Default is the data folder under the es root directory. Multiple paths may be set, separated by commas, e.g. path.data: /path/to/data1,/path/to/data2
path.work: /path/to/work  # Path for temporary files. Default is the work folder under the es root directory.
path.logs: /path/to/logs  # Path for log files. Default is the logs folder under the es root directory.
path.plugins: /path/to/plugins  # Path for plugins, which are commonly used to extend the core functionality of es. Default is the plugins folder under the es root directory.
bootstrap.mlockall: true  # Set to true to lock memory and prevent swapping. es becomes much less efficient once the JVM starts swapping; to make sure it does not swap, set the ES_MIN_MEM and ES_MAX_MEM environment variables to the same value and ensure the machine has enough memory allocated to es. Also allow the elasticsearch process to lock memory; on Linux, run `ulimit -l unlimited` before starting es.
network.bind_host: 192.168.0.1  # IP address to bind (IPv4 or IPv6). Default 0.0.0.0, i.e. any local IP.
network.publish_host: 192.168.0.1  # IP address other nodes use to reach this node. If unset, it is determined automatically; the value must be a real IP address.
network.host: 192.168.0.1  # Sets both bind_host and publish_host at once.
transport.tcp.port: 9300  # TCP port for inter-node communication. Default 9300.
transport.tcp.compress: true  # Whether to compress data in TCP transport. Default false (no compression).
http.port: 9200  # HTTP port for external services. Default 9200.
http.max_content_length: 100mb  # Maximum HTTP request body size. Default 100mb.
http.enabled: false  # Whether to serve HTTP to the outside. Default true (enabled).
gateway.type: local  # Gateway type. Default local (the local filesystem); can also be a distributed filesystem, Hadoop HDFS, or Amazon S3.
gateway.recover_after_nodes: 1  # Start data recovery once N cluster nodes are up. Default 1.
gateway.recover_after_time: 5m  # Timeout for initializing the data-recovery process. Default 5 minutes.
gateway.expected_nodes: 2  # Expected number of nodes in the cluster. Default 2. Once these N nodes are up, recovery starts immediately.
cluster.routing.allocation.node_initial_primaries_recoveries: 4  # Concurrent recovery threads during initial data recovery. Default 4.
cluster.routing.allocation.node_concurrent_recoveries: 2  # Concurrent recovery threads when adding or removing nodes, or during rebalancing. Default 2.
indices.recovery.max_size_per_sec: 0  # Bandwidth limit for data recovery, e.g. 100mb. Default 0 (unlimited).
indices.recovery.concurrent_streams: 5  # Maximum number of concurrent streams opened when recovering data from other shards. Default 5.
discovery.zen.minimum_master_nodes: 1  # Number of other master-eligible nodes a node must be able to see. Default 1. For large clusters, set a larger value (2-4).
discovery.zen.ping.timeout: 3s  # Ping timeout when automatically discovering other nodes in the cluster. Default 3 seconds. Raise it on poor networks to prevent errors during discovery.
discovery.zen.ping.multicast.enabled: false  # Whether multicast node discovery is enabled. Default true.
discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]  # Initial list of master nodes in the cluster; new nodes discover the cluster through them.
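For discovery.zen.minimum_master_nodes, the pre-7.x documentation recommends the quorum formula (master-eligible nodes / 2) + 1 to avoid split-brain. As an illustration (the function name is mine, not from this article), a minimal Python sketch of that rule:

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Quorum recommended by pre-7.x Elasticsearch docs: floor(n/2) + 1."""
    return master_eligible // 2 + 1

# For a cluster with 3 master-eligible nodes,
# set discovery.zen.minimum_master_nodes: 2
print(minimum_master_nodes(3))  # → 2
```

Elasticsearch 7 removed this setting: the new cluster coordination subsystem manages the quorum automatically.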
elasticsearch 7 adds the following two configuration items for the new cluster coordination subsystem:
discovery.seed_hosts
cluster.initial_master_nodes

Official documentation example:

discovery.seed_hosts:
  - 192.168.1.10:9300
  - 192.168.1.11
  - seeds.mydomain.com
cluster.initial_master_nodes:
  - master-node-a
  - master-node-b
  - master-node-c

Pseudo-distributed cluster setup
Of elasticsearch's broadcast and unicast discovery mechanisms, unicast should be used in production. Configuring only network.host is not enough for nodes to discover each other in a multi-machine cluster; network.publish_host must also be configured.
Here, yml + docker-compose is used to build a pseudo-distributed cluster. A true distributed cluster requires only minor modifications and is not covered here.
Note: on the host, edit /etc/sysctl.conf to add vm.max_map_count=262144, then run sysctl -p to apply it.
master elasticsearch.yml
cluster.name: docker-cluster
node.name: master
node.master: true
node.data: true
network.host: 0.0.0.0
network.publish_host: 192.168.31.45  # This is my intranet ip
cluster.initial_master_nodes:
  - master
http.cors.enabled: true
http.cors.allow-origin: "*"
master docker-compose.yml
version: '3.7'
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    container_name: master
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
volumes:
  esdata:
slave elasticsearch.yml
cluster.name: docker-cluster
node.name: slave
node.master: false
node.data: true
network.host: 0.0.0.0
network.publish_host: 192.168.31.45
http.port: 9201
transport.tcp.port: 9301
discovery.seed_hosts:
  - 192.168.31.45:9300
http.cors.enabled: true
http.cors.allow-origin: "*"
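Entries in discovery.seed_hosts may be bare hosts or host:port pairs; when the port is omitted, Elasticsearch falls back to the default transport port 9300. As an illustration only (this helper is hypothetical, not from the article or the Elasticsearch API), a Python sketch of that resolution rule; bracketed port ranges are not expanded here:

```python
DEFAULT_TRANSPORT_PORT = 9300  # Elasticsearch's default transport port

def parse_seed_host(entry: str) -> tuple:
    """Split 'host' or 'host:port' into (host, port), defaulting the port
    to 9300. Port ranges like 'host[9300-9400]' are not expanded here."""
    host, sep, port = entry.rpartition(":")
    if sep and port.isdigit():
        return host, int(port)
    return entry, DEFAULT_TRANSPORT_PORT

for e in ["192.168.31.45:9300", "192.168.1.11", "seeds.mydomain.com"]:
    print(parse_seed_host(e))
```

This is why the slave config above can list the master simply as 192.168.31.45:9300, while plain hostnames would also work on the default port.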
slave docker-compose.yml
version: '3.7'
services:
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    container_name: slave
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata2:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9201:9201
      - 9301:9301
volumes:
  esdata2:

run elasticsearch-head

for Elasticsearch 5.x: docker run -p 9100:9100 mobz/elasticsearch-head:5
for Elasticsearch 2.x: docker run -p 9100:9100 mobz/elasticsearch-head:2
for Elasticsearch 1.x: docker run -p 9100:9100 mobz/elasticsearch-head:1
for fans of alpine there is mobz/elasticsearch-head:5-alpine

Then open http://localhost:9100/
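Once both containers are up, cluster state can be verified with the _cat/health endpoint mentioned earlier. As a standalone illustration (the two-line response below is fabricated, not real output from this cluster), a minimal Python sketch that parses the header and data row of `curl 'localhost:9200/_cat/health?v'` and maps the status color to its meaning:

```python
# Fabricated sample response, for illustration only.
sample = """epoch timestamp cluster status node.total node.data shards pri relo init unassign
1548126200 10:23:20 docker-cluster green 2 2 10 5 0 0 0"""

def parse_cat_health(text: str) -> dict:
    """Zip the header row of _cat/health?v with the data row into a dict."""
    header, row = (line.split() for line in text.strip().splitlines())
    return dict(zip(header, row))

MEANING = {
    "green": "fully functional",
    "yellow": "all data available, some replicas unassigned",
    "red": "some data unavailable",
}

health = parse_cat_health(sample)
print(health["status"], "-", MEANING[health["status"]])
```

With the master and slave both joined, status should be green; a single node with replicas enabled would show yellow, as explained in the configuration section.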
The cluster is now built; the next step is configuring word segmentation (analysis).
At this point you should have a deeper understanding of the ElasticSearch 7 configuration file. Why not try it out in practice?