2025-04-05 Update From: SLTechnology News&Howtos
This article walks through how to build an Elasticsearch cluster environment, covering system configuration, Elasticsearch itself, Kibana, and the optional elasticsearch-head plugin.
Environmental preparation
Elasticsearch:7.9.3
JDK: 15.1 (ES ships with a bundled JDK, but installing your own is recommended)
Kibana: 7.9.3 (should match the ES version)
CentOS 7 (2 GB RAM, 2 CPU cores), three virtual machines
Elasticsearch-head (optional)
The JDK versions supported by each Elasticsearch release are listed in Elastic's support matrix; ES 7.9.3 supports up to JDK 15, which is why the latest JDK 15 is used here. After installing and configuring the JDK, you can check the default garbage collector with `java -XX:+PrintCommandLineFlags -version`.
System environment configuration
sysctl.conf
If vm.max_map_count is set too low, Elasticsearch may fail to start; adjust it to your situation (262144 is the minimum that ES's bootstrap check expects).
After modifying the file, run sysctl -p to make the change take effect. You can verify it with sysctl -a | grep vm.max_map_count.
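As a concrete sketch of the steps above (root privileges assumed):

```shell
# Append the setting to /etc/sysctl.conf
# (262144 is the minimum Elasticsearch's bootstrap check expects)
echo "vm.max_map_count=262144" >> /etc/sysctl.conf

# Reload sysctl settings so the change takes effect without a reboot
sysctl -p

# Verify the new value
sysctl -a | grep vm.max_map_count
```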
/etc/security/limits.conf
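The article does not show the entries themselves; a typical configuration (assuming the dedicated ES account is named `es`) raises the open-file and process limits that Elasticsearch's bootstrap checks expect:

```
# /etc/security/limits.conf — raise limits for the es user
# (values are common recommendations, not taken from the article)
es soft nofile 65536
es hard nofile 65536
es soft nproc  4096
es hard nproc  4096
```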
Elasticsearch configuration
elasticsearch.yml
# Cluster name
cluster.name: Bellamy-cluster
# Node name
node.name: bellamy-${HOSTNAME}
# Storage paths for data and logs; by default they live under the application directory
#path.data: /path/to/data
#path.logs: /path/to/logs
# This node's IP
#network.host: <node IP>
# Set a custom port for HTTP:
#http.port: 9200
transport.port: 9300
# List every node in the cluster; use the same configuration on each node
discovery.seed_hosts: ["ip1:9300", "ip2:9300", "ip3:9300"]
cluster.initial_master_nodes: ["ip1:9300", "ip2:9300", "ip3:9300"]
jvm.options
# If memory is limited, configure 512m; the heap should be no more than half
# of the host's RAM, and must stay below 32 GB
-Xms512m
-Xmx512m
# From JDK 14 onward, G1 is the supported default garbage collector
14-:-XX:+UseG1GC
14-:-XX:G1ReservePercent=25
14-:-XX:InitiatingHeapOccupancyPercent=30
Create a dedicated account for ES and switch to it (required)
Elasticsearch will report an error on startup if run under the root account, and some files generated by that attempt will then need to be deleted. It is best to switch to the dedicated account right after configuration, before starting.
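A minimal sketch of the account setup (the user name `es` and the install path are assumptions, not from the article):

```shell
# Create a dedicated user for Elasticsearch (the name "es" is an assumption)
useradd es

# Give that user ownership of the Elasticsearch install directory (path is an assumption)
chown -R es:es /opt/elasticsearch-7.9.3

# Switch to the new account before starting ES
su - es
```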
Start
Run ./elasticsearch (the executable is in the bin directory).
You can also run in background mode with ./elasticsearch -d; here we run it in the foreground to make the startup log easy to view.
Other errors may appear; fix them according to the hints in the log.
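Once a node is up, a quick way to verify it (assuming the default HTTP port 9200 on localhost) is:

```shell
# Query the root endpoint; a healthy node answers with its name,
# cluster name, and version
curl http://localhost:9200

# Check overall cluster health once all three nodes are started
curl "http://localhost:9200/_cluster/health?pretty"
```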
Kibana configuration
kibana.yml
# Port
server.port: 5601
server.name: "bellamy"
# ES addresses
elasticsearch.hosts: ["http://ip1:9200", "http://ip2:9200", "http://ip3:9200"]
# Kibana itself does not store data; its metadata lives in ES. Try not to change
# this index name, though you can add a suffix, for example:
kibana.index: ".kibana-bellamy"
Start
. / kibana
Kibana is written in Node.js and has no daemon flag, so use the nohup command for background mode.
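A common way to do that (the log file name is an arbitrary choice):

```shell
# Start Kibana in the background, redirecting its output to a log file
nohup ./kibana > kibana.log 2>&1 &
```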
Under Kibana's Stack Monitoring menu you can see the cluster's configuration and monitoring information.
Elasticsearch-head (optional; used together with Kibana, it makes it easy to view cluster information)
# Download
git clone https://github.com/mobz/elasticsearch-head.git
# Install dependencies
npm install
# Start
npm run start
# Visit http://localhost:9100/
In the page that opens, you can connect to any ES node address.
The above is how to build an Elasticsearch environment; if you had similar questions, the steps here may help resolve them.
© 2024 shulou.com SLNews company. All rights reserved.