How to use Docker Compose to build and deploy ElasticSearch


This article focuses on how to use Docker Compose to build and deploy Elasticsearch. The method introduced here is simple, fast, and practical, so interested readers may wish to follow along.

What is Elasticsearch?

Elasticsearch is a distributed, open source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured data. Elasticsearch is built on Apache Lucene and was first released by Elasticsearch N.V. (now Elastic) in 2010. Known for its simple REST-style APIs, distributed nature, speed, and scalability, Elasticsearch is the core component of the Elastic Stack, a set of open source tools for data collection, enrichment, storage, analysis, and visualization. The Elastic Stack is often referred to as the ELK Stack (after Elasticsearch, Logstash, and Kibana). Today the Elastic Stack also includes a rich family of lightweight data collection agents, collectively known as Beats, which can be used to send data to Elasticsearch.

What is the purpose of Elasticsearch?

Elasticsearch performs well in terms of speed and scalability, and has the ability to index many types of content, which means it can be used in a variety of use cases:

Application search

Website search

Enterprise search

Log processing and analysis

Infrastructure indicators and container monitoring

Application performance monitoring

Geospatial data analysis and visualization

Security analytics

Business analytics

How does Elasticsearch work?

Raw data flows into Elasticsearch from multiple sources, including logs, system metrics, and web applications. Data ingestion is the process of parsing, normalizing, and enriching this raw data before it is indexed in Elasticsearch. Once the data is indexed in Elasticsearch, users can run complex queries against it and use aggregations to retrieve complex summaries of their data. In Kibana, users can create powerful visualizations of their data, share dashboards, and manage the Elastic Stack.
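
As a rough sketch of what such a query might look like (the index name logs and the fields message and level are hypothetical, not from this article), a full-text search combined with a terms aggregation can be sent to a local node like this:

curl -X GET "http://localhost:9200/logs/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match": { "message": "timeout" } },
  "aggs": {
    "errors_by_level": { "terms": { "field": "level.keyword" } }
  }
}'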

What is the Elasticsearch index?

An Elasticsearch index is a collection of documents that are related to each other. Elasticsearch stores data as JSON documents. Each document establishes a relationship between a set of keys (the names of fields or properties) and their corresponding values (strings, numbers, Booleans, dates, arrays of values, geolocations, or other types of data).
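
For example, a single document (hypothetical, not taken from this article) might look like this:

{
  "title": "Deploying Elasticsearch with Docker Compose",
  "views": 1024,
  "published": true,
  "published_at": "2021-06-01",
  "tags": ["docker", "elasticsearch"]
}

Here the keys title, views, published, published_at, and tags map to a string, a number, a Boolean, a date, and an array respectively.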

Elasticsearch uses a data structure called an inverted index, which is designed to make full-text searches very fast. An inverted index lists every unique word that appears in any document and identifies, for each word, all of the documents in which it appears.
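
A tiny made-up example: given document 1 containing "brown fox" and document 2 containing "brown dog", the inverted index maps each unique word to the documents that contain it:

brown -> 1, 2
fox   -> 1
dog   -> 2

A search for "brown" can then return documents 1 and 2 directly from this mapping, without scanning every document.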

During the indexing process, Elasticsearch stores documents and builds an inverted index so that the document data can be searched in near real time. Indexing is initiated through the index API, which lets you add JSON documents to a specific index or update existing ones.
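
A minimal sketch of such an index API call against a local node (the index name articles and the document fields are assumptions for illustration; with the 6.x image used later in this article the document type is _doc):

curl -X PUT "http://localhost:9200/articles/_doc/1" -H 'Content-Type: application/json' -d'
{
  "title": "Hello Elasticsearch",
  "views": 1
}'

Re-running the same call with a changed body reindexes document 1 in place.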

Description

Because my local machine has limited resources, I cannot start multiple virtual machines, and an ES cluster needs nodes with different IPs, so I only build a single-node setup rather than a cluster.

1. Directory preparation

mkdir /docker/es
mkdir /docker/es/data
mkdir /docker/es/config
mkdir /docker/es/plugins

2. ES configuration preparation

cd /docker/es/config
vi elasticsearch.yml

Use the following configuration:

# Cluster name
cluster.name: elasticsearch-cluster
# Node name
node.name: es-node-1
# Bind host; 0.0.0.0 means the current node's own IP
network.host: 0.0.0.0
# IP address that other nodes use to interact with this node.
# If it is not set here, the value must be the node's real (native) IP address.
network.publish_host: 192.168.200.135
# HTTP port for external services; the default is 9200
http.port: 9200
# TCP port for interaction between nodes; the default is 9300
transport.tcp.port: 9300
# Whether cross-origin requests are supported; the default is false
http.cors.enabled: true
# When cross-origin access is allowed, the default is *, meaning all domains are allowed.
# To allow only some sites, a regular expression can be used, for example
# /https?:\/\/localhost(:[0-9]+)?/ allows only local addresses.
http.cors.allow-origin: "*"
# Whether this node can act as a master node
node.master: true
# Whether this node acts as a data node
node.data: true
# ip:port of all master and data nodes; there is only one local node, so enabling this prevents normal startup and it stays commented out
#discovery.seed_hosts: ["192.168.200.135:9300"]
# This parameter determines how many master-eligible nodes must communicate during master election to prevent split brain: N/2+1
#discovery.zen.minimum_master_nodes:
# Initial master nodes; there is only one local node, so this also stays commented out
#cluster.initial_master_nodes: ["es-node-1"]

3. Prepare docker-compose.yml

vi docker-compose.yml

The contents are as follows:

version: '3'
services:
  elasticsearch:
    image: elasticsearch:6.8.13
    restart: always
    hostname: es1
    container_name: es-single
    privileged: true
    volumes:
      - /docker/es/data:/usr/share/elasticsearch/data
      - /docker/es/plugins:/usr/share/elasticsearch/plugins
      - /docker/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      # heap size and single-node discovery
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    ports:
      - '9200:9200'   # http communication port
      - '9300:9300'   # java / cluster communication port

4. Launch the container

docker-compose up -d

5. Check the container

docker-compose ps

To check whether startup succeeded, view the startup log with docker container logs <container id> (or docker container logs es-single).

If java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes appears in the startup log, you need to grant write permission on the data directory: chmod 777 /docker/es/data
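
Once the container is up, a quick way to verify that the node is reachable through the published 9200 port is to query it with curl (the exact JSON returned depends on your version and settings):

# Returns the node name, cluster name, and version as JSON
curl http://localhost:9200
# Cluster health; a single-node setup normally reports "green" or "yellow"
curl http://localhost:9200/_cluster/health?pretty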

At this point, I believe you have a deeper understanding of how to use Docker Compose to build and deploy Elasticsearch. You might as well try it out in practice. For more related content, follow us and keep learning!
