Shulou (Shulou.com) 05/31 Report --
This article walks through the configuration file of the distributed search engine Elasticsearch. It is quite detailed and has some reference value; interested readers are encouraged to read on.
There are two configuration files in elasticsearch's config folder: elasticsearch.yml and logging.yml. The first is the basic configuration file of es, and the second is the log configuration file. es uses log4j for logging, so logging.yml can be set up like an ordinary log4j configuration file. The rest of this article covers what can be configured in elasticsearch.yml.

cluster.name: elasticsearch
Sets the cluster name of es. The default is elasticsearch. es automatically discovers other es nodes on the same network segment; if there are multiple clusters on the same segment, this attribute is used to tell them apart.

node.name: "Franz Kafka"
The node name. By default a name is picked at random from a list in the name.txt file in the config folder inside es's jar package, to which the authors added many interesting names.

node.master: true
Specifies whether the node is eligible to be elected as master. The default is true. The first machine started in the cluster becomes master; if that machine goes down, a new master is elected.

node.data: true
Specifies whether the node stores index data. The default is true.

index.number_of_shards: 5
Sets the default number of shards per index. The default is 5.

index.number_of_replicas: 1
Sets the default number of replicas per index. The default is 1.

path.conf: /path/to/conf
Sets the path where configuration files are stored. The default is the config folder under the es root directory.

path.data: /path/to/data
Sets the path where index data is stored. The default is the data folder under the es root directory. Multiple paths can be given, separated by commas, for example:
path.data: /path/to/data1,/path/to/data2

path.work: /path/to/work
Sets the path where temporary files are stored. The default is the work folder under the es root directory.

path.logs: /path/to/logs
Sets the path where log files are stored. The default is the logs folder under the es root directory.

path.plugins: /path/to/plugins
Sets the path where plugins are stored. The default is the plugins folder under the es root directory.

bootstrap.mlockall: true
Set to true to lock memory. Because es becomes inefficient once the jvm starts swapping, to make sure it never swaps you can set the ES_MIN_MEM and ES_MAX_MEM environment variables to the same value and make sure the machine has enough memory allocated to es. The elasticsearch process must also be allowed to lock memory; on linux this can be done with the command `ulimit -l unlimited`.

network.bind_host: 192.168.0.1
Sets the bind address, which can be ipv4 or ipv6. The default is 0.0.0.0.

network.publish_host: 192.168.0.1
Sets the address this node publishes for other nodes to communicate with it. If it is not set, it is determined automatically; the value must be a real ip address.

network.host: 192.168.0.1
Sets both the bind_host and publish_host parameters at once.

transport.tcp.port: 9300
Sets the tcp port used for communication between nodes. The default is 9300.

transport.tcp.compress: true
Sets whether data transferred over tcp is compressed. The default is false (no compression).

http.port: 9200
Sets the http port for external services. The default is 9200.

http.max_content_length: 100mb
Sets the maximum size of request content. The default is 100mb.

http.enabled: false
Sets whether the http protocol is used to provide services. The default is true (enabled).

gateway.type: local
Sets the gateway type. The default is local, i.e. the local file system. It can also be set to a distributed file system, hadoop's HDFS, or amazon's S3 server; how to configure the other gateway types will be described in detail another time.
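To make the options above concrete, here is a minimal elasticsearch.yml sketch for a single small node. The cluster name, node name, paths, and addresses are illustrative assumptions rather than recommended values, and the option names follow the older es versions described in this article.

# elasticsearch.yml - minimal sketch with illustrative values
cluster.name: my-es-cluster            # keep this unique per cluster on the same network segment
node.name: "node-1"                    # omit to get a random name from name.txt
node.master: true                      # eligible to be elected master
node.data: true                        # stores index data
index.number_of_shards: 5
index.number_of_replicas: 1
path.data: /var/data/elasticsearch     # multiple paths may be comma-separated
path.logs: /var/log/elasticsearch
bootstrap.mlockall: true               # also set ES_MIN_MEM and ES_MAX_MEM to the same value
network.host: 192.168.0.1              # sets both bind_host and publish_host
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200

Nodes on the same network segment that share this cluster.name will discover each other and form one cluster.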
gateway.recover_after_nodes: 1
Sets how many nodes (N) in the cluster must be started before data recovery begins. The default is 1.

gateway.recover_after_time: 5m
Sets the timeout for the initial data recovery process. The default is 5 minutes.

gateway.expected_nodes: 2
Sets the number of nodes expected in this cluster. The default is 2; once that many nodes have started, data recovery begins immediately.

cluster.routing.allocation.node_initial_primaries_recoveries: 4
Sets the number of concurrent recovery threads used during initial data recovery. The default is 4.

cluster.routing.allocation.node_concurrent_recoveries: 2
Sets the number of concurrent recovery threads used when nodes are added or removed, or during rebalancing. The default is 4.

indices.recovery.max_size_per_sec: 0
Limits the bandwidth used for data recovery, for example 100mb. The default is 0, meaning no limit.

indices.recovery.concurrent_streams: 5
Limits the maximum number of streams that can be opened concurrently when recovering data from other shards. The default is 5.

discovery.zen.minimum_master_nodes: 1
Set this so that each node in the cluster knows about at least N other master-eligible nodes. The default is 1; for large clusters a larger value (2-4) can be set.

discovery.zen.ping.timeout: 3s
Sets the ping timeout used when automatically discovering other nodes in the cluster. The default is 3 seconds. On a poor network a higher value helps prevent errors during discovery.

discovery.zen.ping.multicast.enabled: false
Sets whether multicast discovery is enabled. The default is true.

discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]
Sets the initial list of master-eligible nodes in the cluster, which is used to discover new nodes joining the cluster.

Finally, here are the slow-log parameter settings:

index.search.slowlog.level: TRACE
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.query.trace: 500ms
index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 200ms

A consolidated sketch of these recovery, discovery, and slow-log settings is appended at the end of this article. That is all of the content of the article "Example Analysis of the Distributed Search Elasticsearch Configuration File". Thank you for reading! I hope the content shared here helps you; for more related knowledge, welcome to follow the industry information channel.
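As promised above, here is a hedged sketch that gathers the recovery, discovery, and slow-log options into one elasticsearch.yml fragment. The host names, node counts, and thresholds are illustrative assumptions, and the option names again follow the older es versions described in the article.

# elasticsearch.yml - recovery, discovery and slow-log sketch with illustrative values
gateway.recover_after_nodes: 1
gateway.recover_after_time: 5m
gateway.expected_nodes: 2
cluster.routing.allocation.node_initial_primaries_recoveries: 4
cluster.routing.allocation.node_concurrent_recoveries: 2
indices.recovery.max_size_per_sec: 100mb           # 0 means unlimited
indices.recovery.concurrent_streams: 5
discovery.zen.minimum_master_nodes: 2               # use 2-4 for larger clusters
discovery.zen.ping.timeout: 3s
discovery.zen.ping.multicast.enabled: false         # disable multicast, rely on the unicast list below
discovery.zen.ping.unicast.hosts: ["host1", "host2:9300"]
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.fetch.warn: 1s

Such a fragment would typically be appended to the basic configuration sketched earlier, with multicast disabled and the master-eligible hosts listed explicitly so that new nodes can find the cluster.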