This article explains how to install and deploy a distributed full-text search engine (Elasticsearch) under Linux. The editor finds it very practical and shares it here for reference; follow along to see how it is done.
Install Elasticsearch
Installing Elasticsearch on Ubuntu is very simple. We will enable the Elasticsearch repository, import the repository GPG key, and then install the Elasticsearch server.
The Elasticsearch package comes with a bundled version of OpenJDK, so you don't have to install Java.
First, update the package index and install the dependencies needed to add the new HTTPS repository:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo apt update
linuxmi@linuxmi:~/www.linuxmi.com$ sudo apt install apt-transport-https ca-certificates wget
Import the GPG key for the repository:
linuxmi@linuxmi:~/www.linuxmi.com$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
The above command should output OK, which means that the key has been successfully imported and that packages from this repository will be considered trusted packages.
Next, add the Elasticsearch repository to the system by issuing the following command:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo sh -c 'echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" > /etc/apt/sources.list.d/elastic-7.x.list'
If you want to install an earlier version of Elasticsearch, change 7.x in the command above to the version you need.
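If you need to pin one specific release rather than the newest package in the series, apt can install an exact version. A minimal sketch, assuming the repository still offers the release you want (the version string below is only an example):
linuxmi@linuxmi:~/www.linuxmi.com$ apt-cache madison elasticsearch    # list the versions the repository offers
linuxmi@linuxmi:~/www.linuxmi.com$ sudo apt install elasticsearch=7.8.1    # install that exact version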
After enabling the repository, install Elasticsearch by entering the following command:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo apt update
linuxmi@linuxmi:~/www.linuxmi.com$ sudo apt install elasticsearch
The Elasticsearch service does not start automatically after the installation completes. To start the service and enable it to start at boot, run:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo systemctl enable --now elasticsearch.service
Synchronizing state of elasticsearch.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable elasticsearch
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /lib/systemd/system/elasticsearch.service.
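Before moving on, you can check that the unit is active and enabled, for example:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo systemctl status elasticsearch.service
The output should report the service as enabled and active (running).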
To verify that Elasticsearch is running, use curl to send an HTTP request to port 9200 on localhost:
linuxmi@linuxmi:~/www.linuxmi.com$ curl -X GET "localhost:9200/"
You should see something similar to the following:
{"name": "linuxmi", "cluster_name": "elasticsearch", "cluster_uuid": "VnSPAJorQXiyYUTtCzoEQQ", "version": {"number": "7.8.1", "build_flavor": "default", "build_type": "deb", "build_hash": "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89", "build_date": "2020-07-21T16:40:44.668009Z" "build_snapshot": false, "lucene_version": "8.5.1", "minimum_wire_compatibility_version": "6.8.0", "minimum_index_compatibility_version": "6.0.0-beta1"}, "tagline": "You Know, for Search"}
It may take 5 to 10 seconds for the service to start. If you see curl: (7) Failed to connect to localhost port 9200: Connection refused, wait a few seconds and try again.
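If you are scripting the installation, a small polling loop avoids guessing how long to wait. This is only a sketch and assumes the default localhost:9200 endpoint:
# keep retrying until the HTTP API answers
until curl -s "localhost:9200/" > /dev/null; do
    echo "Waiting for Elasticsearch to start..."
    sleep 2
done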
To view messages logged by the Elasticsearch service, use the following command:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo journalctl -u elasticsearch
[sudo] password for linuxmi:
-- Logs begin at Thu 2020-05-28 14:51:20 CST, end at Thu 2020-07-30 04:03:45 CST. --
Jul 30 03:43:33 linuxmi systemd[1]: Starting Elasticsearch...
Jul 30 03:44:30 linuxmi systemd[1]: Started Elasticsearch.
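To keep watching new log messages as they arrive, add the -f (follow) flag:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo journalctl -u elasticsearch -f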
OK, that's it. Elasticsearch is installed on your Ubuntu server.
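Before moving on to configuration, you can optionally confirm that the engine really indexes and searches documents. The sketch below uses a hypothetical index named linuxmi; any index name would do, and refresh=true simply makes the document searchable immediately:
linuxmi@linuxmi:~/www.linuxmi.com$ curl -X POST "localhost:9200/linuxmi/_doc?pretty&refresh=true" -H 'Content-Type: application/json' -d '{"title": "Install Elasticsearch on Ubuntu", "tags": ["linux", "search"]}'
linuxmi@linuxmi:~/www.linuxmi.com$ curl -X GET "localhost:9200/linuxmi/_search?q=title:elasticsearch&pretty"
The second request should return the document you just indexed in the hits section.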
Configure Elasticsearch
Elasticsearch data is stored in the /var/lib/elasticsearch directory. The configuration files are located in /etc/elasticsearch, and Java startup options can be configured in the /etc/default/elasticsearch file.
By default, Elasticsearch is configured to listen only on localhost. If the clients connecting to Elasticsearch also run on the same host and you are setting up a single-node cluster, you do not need to change the default configuration file.
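For reference, a minimal single-node elasticsearch.yml could look like the sketch below; the cluster and node names are placeholders, and the paths are the defaults used by the deb package:
# /etc/elasticsearch/elasticsearch.yml -- minimal single-node sketch
cluster.name: linuxmi-cluster
node.name: linuxmi-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1
discovery.type: single-node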
Remote access
Out of the box, Elasticsearch does not implement authentication, so anyone who can reach the HTTP API can access the data.
To allow remote access to your Elasticsearch server, you will need to configure your firewall and open TCP port 9200.
Typically, you only want to allow access to the Elasticsearch server from a specific IP address or IP range. For example, to allow connections only from the 192.168.135.0/24 subnet, run the following command:
sudo ufw allow proto tcp from 192.168.135.0/24 to any port 9200
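You can then list the active rules to confirm the new entry was added, for example:
sudo ufw status numbered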
Once the firewall is configured, the next step is to edit the Elasticsearch configuration and allow Elasticsearch to listen for external connections.
To do this, open the elasticsearch.yml configuration file:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo nano /etc/elasticsearch/elasticsearch.yml
Find the line containing network.host, uncomment it, and change the value to 0.0.0.0:
network.host: 0.0.0.0
If you have more than one network interface on your machine, specify the interface's IP address instead to force Elasticsearch to listen only on that interface.
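For example, assuming the server's address on the internal interface is 192.168.135.10 (a placeholder), you would set:
network.host: 192.168.135.10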
Restart the Elasticsearch service for the changes to take effect:
linuxmi@linuxmi:~/www.linuxmi.com$ sudo systemctl restart elasticsearch
OK. You can now connect to the Elasticsearch server from a remote location.
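From a machine inside the allowed subnet, a quick check is to query the cluster health endpoint; replace the placeholder IP below with your server's address:
curl -X GET "http://192.168.135.10:9200/_cluster/health?pretty"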
Thank you for reading! This concludes the article on "how to install and deploy a distributed full-text search engine under Linux". I hope the above content has been helpful and that you have learned something new. If you think the article is good, feel free to share it so more people can see it!