

Introduction and installation of ElasticSearch

2025-01-15 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

    

1. Introduction to ElasticSearch

(1) An interesting history of ElasticSearch

   Shay Banon says his involvement with Lucene was entirely accidental. As an unemployed engineer, he moved to London with his new wife, who planned to study cooking there, and he wanted to build an app that would let her search recipes easily; that led him to Lucene. Building search directly on Lucene involves many problems and a lot of repetitive work, so Shay kept abstracting on top of it to make it easier to embed search in Java programs. After a period of polishing, his first open source project, Compass, was born. Later, Shay took a new job in a high-performance distributed environment, where he saw a growing need for an easy-to-use, high-performance, real-time, distributed search service. He therefore decided to rewrite Compass, turning it from a library into a standalone server, and renamed it Elasticsearch.

(2) Overview of ElasticSearch

   ElasticSearch is an open source search engine built on Apache Lucene. It is written in Java and uses Lucene to build indexes and provide search. ElasticSearch's goal is to make full-text search simple: developers can implement search through its simple, straightforward RESTful API without having to deal with Lucene's complexity. ES also scales out easily to handle PB-scale structured and unstructured data.
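As a taste of the RESTful API, the sketch below indexes and then searches a document with curl. The index name `recipes` and the host `localhost:9200` are hypothetical; it assumes an ES 6.x node is running locally:

```shell
# Index a document (PUT creates/replaces document 1 in index "recipes"):
curl -X PUT 'http://localhost:9200/recipes/doc/1' \
  -H 'Content-Type: application/json' \
  -d '{"title": "tomato soup", "minutes": 30}'

# Full-text search for it:
curl 'http://localhost:9200/recipes/_search?q=title:tomato&pretty'
```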

(3) Comparison between ElasticSearch and Solr

Interface:

  Solr is structured like a web server

  Elasticsearch offers a REST-style access interface

Distributed:

  Solr: SolrCloud, supported since Solr 4.x

  Elasticsearch: designed for distribution from the start

Supported formats:

  Solr: JSON, XML

  Elasticsearch: JSON

(4) Comparison between ElasticSearch and MySQL

A rough analogy: an ES index corresponds to a MySQL database, a type to a table, a document to a row, and a field to a column.

2. Stand-alone installation of ElasticSearch

(1) Stand-alone installation

Precondition

Download: https://github.com/elastic/elasticsearch

Note: before installing, make sure a JDK is installed; Elasticsearch 6.x requires Java 8 or above.

You cannot start ES as the root user; otherwise it refuses to start with an error such as "can not run elasticsearch as root".

Installation steps

① decompress:

[hadoop@hadoop03 ~]$ tar zxvf elasticsearch-6.2.0.tar.gz -C /application/

② modify the configuration file:

# /application/elasticsearch-6.2.0/config/elasticsearch.yml
cluster.name: zzy-application    # name of the cluster
node.name: node-1    # node name
path.data: /home/hadoop/data/elasticsearch-data    # data storage directory
path.logs: /home/hadoop/logs/elasticsearch-log    # log storage directory
network.host: 192.168.191.130    # bind host

③ version compatibility issues

Requires kernel 3.5 + with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER

The CentOS 6.x kernel is too old: you either need CentOS 7, or you must upgrade the CentOS 6.x kernel to 3.5 or above. Here we choose to upgrade the CentOS 6.x kernel.

# related operations:

[hadoop@hadoop03 ~]$ cat /etc/issue    # view release info
[hadoop@hadoop03 ~]$ uname -a    # view kernel version

# upgrade the kernel

[hadoop@hadoop03 ~]$ sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

If the import fails, it is usually because curl cannot access the https domain from the server: the installed nss version is too old.

You can fix it with:

[hadoop@hadoop03 ~]$ sudo yum -y update nss

# install the kernel

[hadoop@hadoop03 ~]$ sudo yum --enablerepo=elrepo-kernel install kernel-lt -y

# edit the grub.conf file and change the Grub boot order so the new kernel is the default (set default=0 in /boot/grub/grub.conf)

# if the following error occurs:

Max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

# limits.conf

[hadoop@hadoop03 ~]$ sudo vim /etc/security/limits.conf

# add the following:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

Note: the new limits only take effect after logging out and back in.
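After logging back in, you can check whether the limits took effect (the expected values follow the limits.conf entries above):

```shell
# Print the current per-user limits for this shell session:
ulimit -n    # max open file descriptors (expected: 65536)
ulimit -u    # max user processes (expected: 2048)
```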

If another error appears:

Max number of threads [1024] for user [bigdata] is too low, increase to at least [4096]

# modify the configuration file 90-nproc.conf

[hadoop@hadoop03 ~]$ sudo vim /etc/security/limits.d/90-nproc.conf

# raise the nproc soft limit, e.g.:

* soft nproc 4096

# the next error will be:

Max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

# modify the configuration file /etc/sysctl.conf and add:

vm.max_map_count=262144

# then make it take effect:

[hadoop@hadoop03 ~]$ sudo sysctl -p

# Last error:

System call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

This is because CentOS 6 does not support SecComp, while bootstrap.system_call_filter has defaulted to true since ES 5.2.0, so the check fails and ES cannot start.

# modify elasticsearch configuration file

[hadoop@hadoop03 ~]$ vim /application/elasticsearch-6.2.0/config/elasticsearch.yml

# under the Memory section:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

④ restart the computer

⑤ start ES

[hadoop@hadoop03 ~]$ /application/elasticsearch-6.2.0/bin/elasticsearch -d

Finally, if the web interface at http://hadoop03:9200 responds, the installation succeeded.
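A quick way to verify that the node is up is to query the root endpoint; hadoop03 is this tutorial's host name, so substitute your own:

```shell
# An ES 6.2.0 node answers with a small JSON document containing
# the node name, cluster name, and version:
curl 'http://hadoop03:9200/?pretty'
```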

3. Cluster installation of ElasticSearch

Note: if you are installing a cluster and each of your Linux machines has the problems above, then every machine needs its kernel upgraded; make sure all nodes can successfully start a stand-alone ES first.

   Once every node can run stand-alone ES, installing the cluster is very simple: as long as the nodes are in the same LAN and network segment and share the same cluster name, ES will automatically discover the other nodes.

① send the stand-alone version of ES to each node:

[hadoop@hadoop03 application]$ scp -r elasticsearch-6.2.0 hadoop01:$PWD
[hadoop@hadoop03 application]$ scp -r elasticsearch-6.2.0 hadoop02:$PWD

② modify the configuration files:

# Node 1 hadoop01:

cluster.name: zzy-application
http.port: 9200
network.host: 0.0.0.0

# Node 2 hadoop02:

cluster.name: zzy-application
http.port: 9200
network.host: 0.0.0.0
transport.tcp.port: 19300

# Node 3:

cluster.name: zzy-application
http.port: 9200
network.host: 0.0.0.0
transport.tcp.port: 29300

After the ③ configuration is complete, start ES:

You can view the cluster information through the ES plug-in elasticsearch-head:

Here elasticsearch-head is a Chrome extension, which needs to be downloaded. The editor provides one directly below; it can be installed under the extensions of the Google Chrome browser.

Download address: http://down.51cto.com/data/2458080
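If you prefer the command line to the head plugin, the cluster can also be inspected through the _cat and _cluster APIs (assuming the cluster is running and hadoop03 is reachable):

```shell
# One line per node, with heap, load, and a marker for the master:
curl 'http://hadoop03:9200/_cat/nodes?v'
# Overall health: green / yellow / red, plus shard counts:
curl 'http://hadoop03:9200/_cluster/health?pretty'
```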

Note: here the cluster is implemented through different ports on one machine. If there are multiple machines, you need to add the following to each node's configuration file:

discovery.zen.ping.unicast.hosts: ["nodeIP:9300", "nodeIP:9300"]

This works because the ES cluster has an automatic discovery mechanism: we provide a list of seed hosts, and as long as the cluster name is the same and the nodes are on the same network, ES nodes with that cluster name automatically form a cluster. Discovery based on such a list is more reliable, although it makes cluster expansion somewhat slower.
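Putting the pieces together, a node configuration for a multi-machine cluster might look like the sketch below (host names follow this tutorial's examples; adjust them to your environment):

```yaml
# /application/elasticsearch-6.2.0/config/elasticsearch.yml on hadoop01
cluster.name: zzy-application
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
# seed list: the other nodes' transport addresses
discovery.zen.ping.unicast.hosts: ["hadoop02:9300", "hadoop03:9300"]
```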

4. Elasticsearch Kibana

   Kibana is essentially a web client for Elasticsearch: an analytics and visualization platform through which you search, view, and interact with the indexes stored in Elasticsearch. It makes it convenient to perform advanced data analysis and to visualize data in a variety of formats, such as charts, tables, and maps.

(1) Simple deployment

① download

URL: http://www.elastic.co/downloads/kibana

Pay special attention to the version match between Kibana and ES. Kibana-6.2.0-linux-x86_64.tar is used here.

② configuration:

Decompress: [hadoop@hadoop03 ~]$ tar zxvf kibana-6.2.0-linux-x86_64.tar.gz -C /application/

Configuration file: [hadoop@hadoop03 config] $vim kibana.yml
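The minimal settings to change in kibana.yml are the bind address and the ES endpoint; a sketch for Kibana 6.x, with the host name following this tutorial's examples:

```yaml
# /application/kibana-6.2.0-linux-x86_64/config/kibana.yml
server.port: 5601                          # default Kibana port
server.host: "0.0.0.0"                     # listen on all interfaces
elasticsearch.url: "http://hadoop03:9200"  # the ES node to connect to
```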

Note: since ES cannot be started by the root user, the logs directory needs to be created by the user who starts ES:

# start in the background (run from the Kibana installation directory):

nohup bin/kibana > logs/kibana.log 2>&1 &

③ test

When you access port 5601 on the machine where Kibana is deployed and the welcome page appears, the deployment has succeeded.

(2) Overview of Kibana pages

   Discover page: browse data interactively. You can access every document in every index that matches the configured index pattern, submit search queries, filter search results, and view document data. You can also view statistics on the fields of matching documents, and select a time range and refresh frequency.

   Visualize page: design visualizations of your data. These visualizations can be saved individually or combined into dashboards. A visualization can be based on one of the following data source types: 1. a new interactive search; 2. a saved search; 3. an existing visualization.

   Dashboard page: arrange saved visualizations freely, then save the dashboard to share or reload it.

   Settings page: before using Kibana, you must first tell it which Elasticsearch indexes to search; you can configure one or more index patterns.
