

Elasticsearch index data snapshot backup and restore

2025-01-16 Update From: SLTechnology News&Howtos



Recently, the volume of tracking data written to our online ES cluster has skyrocketed, and the machines are close to running out of memory and disk space. This data is cold: it does not need to be queried at present, but it cannot simply be deleted, because it must be retained for future data analysis. Due to the rush to launch, the business code did not split the index by month; a whole year of data was written into a single index, which has grown past 100 GB.

To relieve the disk-space bottleneck, the rarely used indices are snapshotted to cold storage.

Application scenario:

Three-node ES cluster: 192.168.85.39, 192.168.85.36, 192.168.85.33

NFS storage server: 192.168.5.63 (any server with spare disk space that can export a shared directory over NFS)

1. Set up the NFS shared storage server (run on 192.168.5.63)

1. Install the NFS service:

yum install -y nfs-utils

2. Enable the services at boot:

systemctl enable rpcbind.service
systemctl enable nfs-server.service

3. Start the rpcbind and nfs services:

systemctl start rpcbind.service
systemctl start nfs-server.service

4. In the firewalld firewall, open the NFS listening ports to the ES nodes' private IPs: port 111, port 20048, and port 2049 (tcp and udp).

5. Create the local shared data directory and set its permissions:

mkdir -p /data/db/elasticsearch/backup
chmod 777 /data/db/elasticsearch/backup
chown -R elasticsearch:elasticsearch /data/db/elasticsearch/backup

6. Configure NFS access to the directory in /etc/exports:

vim /etc/exports
/data/db/elasticsearch/backup 192.168.85.39(rw,sync,all_squash) 192.168.85.33(rw,sync,all_squash) 192.168.85.36(rw,sync,all_squash)

exportfs -r    # reload the export table to take effect
exportfs -s    # view the current exports

7. Install the NFS client tools on the ES nodes (run on 85.39, 85.33, and 85.36; the showmount command is provided by nfs-utils) and enable the service:

yum -y install nfs-utils
systemctl enable rpcbind.service
systemctl start rpcbind.service

8. Create the mount point (on each of 85.39, 85.33, and 85.36) and mount the shared directory:

mkdir /mnt/elasticsearch
chmod 777 /mnt/elasticsearch
mount -t nfs 192.168.5.63:/data/db/elasticsearch/backup /mnt/elasticsearch
df -h    # confirm the mount succeeded
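The /etc/exports line in step 6 can be generated from the list of node IPs instead of being typed by hand; a minimal sketch (the IPs are the three ES nodes from this article):

```shell
# Build the /etc/exports line granting each ES node rw,sync,all_squash access
# to the shared backup directory.
NODES="192.168.85.39 192.168.85.33 192.168.85.36"
LINE="/data/db/elasticsearch/backup"
for ip in $NODES; do
  LINE="$LINE ${ip}(rw,sync,all_squash)"
done
echo "$LINE"
# Append to /etc/exports (requires root), then run: exportfs -r
```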

2. Create a snapshot repository

curl -XPUT http://192.168.85.39:9002/_snapshot/backup -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/elasticsearch/backup",
    "compress": true,
    "max_snapshot_bytes_per_sec": "50mb",
    "max_restore_bytes_per_sec": "50mb"
  }
}'

Notes:

1. This can be run on any node of the ES cluster.
2. backup: the repository name; the generated backup files are stored under /mnt/elasticsearch/backup.
3. max_snapshot_bytes_per_sec and max_restore_bytes_per_sec throttle backup and restore to 50mb per second, to keep disk IO from spiking. Higher values make backup and restore faster; 50mb is the recommended value. On machines with high IO capacity, the limits can be omitted:

curl -XPUT http://192.168.85.39:9002/_snapshot/backup -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/elasticsearch/backup",
    "compress": true
  }
}'
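To review the request body before sending it, the settings above can be written to a file and passed with -d @file. Note that newer Elasticsearch versions also require an explicit Content-Type header, which the original commands omit; a sketch (the curl line is echoed here rather than executed):

```shell
# Write the repository settings from this section to a file for review.
cat > /tmp/repo.json <<'EOF'
{
  "type": "fs",
  "settings": {
    "location": "/mnt/elasticsearch/backup",
    "compress": true,
    "max_snapshot_bytes_per_sec": "50mb",
    "max_restore_bytes_per_sec": "50mb"
  }
}
EOF
# Dry run: print the command that would register the repository.
echo "curl -XPUT http://192.168.85.39:9002/_snapshot/backup -H 'Content-Type: application/json' -d @/tmp/repo.json"
```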

3. Create a snapshot backup

1. Snapshot backup of all indices

curl -XPUT 192.168.85.39:9002/_snapshot/backup/snapshot_all?pretty

Notes:

1. The backup goes to the repository named backup.
2. The snapshot name is snapshot_all.
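Because the command above does not pass wait_for_completion, it returns before the snapshot finishes; its state can then be checked with GET on _snapshot/backup/snapshot_all. A sketch of extracting the state field from such a response (the sample JSON below only mimics the response shape; in practice pipe the curl output instead):

```shell
# Illustrative sample of a snapshot info response; replace with:
#   curl -s http://192.168.85.39:9002/_snapshot/backup/snapshot_all
RESPONSE='{"snapshots":[{"snapshot":"snapshot_all","state":"SUCCESS"}]}'
# Pull out the state value (IN_PROGRESS, SUCCESS, FAILED, ...).
STATE=$(echo "$RESPONSE" | grep -o '"state":"[A-Z_]*"' | cut -d'"' -f4)
echo "$STATE"
```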

2. Snapshot backup of a single specified index (to keep different indices' backup directories separate, it is recommended to name the repository after the index)

To back up just the index user_event_201810:

2.1 First create a repository for the index:

curl -XPUT http://192.168.85.39:9002/_snapshot/user_event_201810 -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/elasticsearch/user_event_201810",
    "compress": true,
    "max_snapshot_bytes_per_sec": "50mb",
    "max_restore_bytes_per_sec": "50mb"
  }
}'

2.2 Snapshot the index user_event_201810:

curl -XPUT http://192.168.85.39:9002/_snapshot/user_event_201810/user_event_201810?wait_for_completion=true -d '{
  "indices": "user_event_201810",
  "ignore_unavailable": "true",
  "include_global_state": false
}'

Notes:

1. The repository created is named user_event_201810.
2. Its file directory is /mnt/elasticsearch/user_event_201810.
3. indices: the source index, user_event_201810.
4. The wait_for_completion=true parameter makes the request return only after the snapshot finishes, with its result status.
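Since the root problem was a year of data in one place, the one-repository-per-index convention above can be scripted across a series of monthly indices. A dry-run sketch (the month list is illustrative; the commands are echoed rather than executed, so remove the echo to run them):

```shell
# Generate a repository-creation and snapshot command pair per monthly index,
# following the per-index naming convention from this section.
ES="http://192.168.85.39:9002"
for month in 201801 201802 201803; do
  idx="user_event_${month}"
  echo "curl -XPUT ${ES}/_snapshot/${idx} -d '{\"type\":\"fs\",\"settings\":{\"location\":\"/mnt/elasticsearch/${idx}\",\"compress\":true}}'"
  echo "curl -XPUT ${ES}/_snapshot/${idx}/${idx}?wait_for_completion=true -d '{\"indices\":\"${idx}\",\"include_global_state\":false}'"
done
```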

4. Restore snapshot backup data to the ES cluster

1. Restore a full-index snapshot backup

curl -XPOST http://192.168.85.39:9002/_snapshot/backup/snapshot_all/_restore

Notes:

1. Repository name: backup.
2. Snapshot name: snapshot_all.

2. Restore a snapshot backup of a specified index

To restore the snapshot of index user_event_201810:

curl -XPOST http://192.168.85.39:9002/_snapshot/user_event_201810/user_event_201810/_restore

Notes:

1. Repository name: user_event_201810.
2. Snapshot name: user_event_201810.
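A restore fails if the target index already exists and is open. Elasticsearch's restore API supports rename_pattern and rename_replacement, which restore the data under a new name instead of overwriting the live index. A dry-run sketch of such a request body (the restored_ prefix is an illustrative choice):

```shell
# Restore body that renames user_event_201810 to restored_user_event_201810
# on the way in; the curl command is echoed rather than executed.
BODY='{
  "indices": "user_event_201810",
  "rename_pattern": "user_event_(.+)",
  "rename_replacement": "restored_user_event_$1"
}'
echo "curl -XPOST http://192.168.85.39:9002/_snapshot/user_event_201810/user_event_201810/_restore -d '$BODY'"
```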

5. Auxiliary commands

1. View existing repositories

curl 192.168.85.39:9002/_cat/repositories

2. View existing snapshots

curl -XGET http://192.168.85.39:9002/_snapshot    # view all
curl -XGET http://192.168.85.39:9002/_snapshot/user_event_201810/user_event_201810    # view the specified snapshot

3. Delete a snapshot

curl -XDELETE http://192.168.85.39:9002/_snapshot/user_event_201810/user_event_201810    # delete the snapshot user_event_201810

4. Delete a repository

curl -XDELETE http://192.168.85.39:9002/_snapshot/user_event_201810    # delete the repository user_event_201810

Configuration file of one of the Elasticsearch nodes:

cluster.name: my-application1
node.name: node-3
path.data: /data/db/elasticsearch
path.logs: /data/log/elasticsearch/logs
path.repo: ["/mnt/elasticsearch"]
network.host: 192.168.85.33
http.port: 9002
transport.tcp.port: 9102
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.85.39", "192.168.85.36", "192.168.85.33"]
discovery.zen.minimum_master_nodes: 2
indices.query.bool.max_clause_count: 10240
http.cors.enabled: true
http.cors.allow-origin: "*"

NFS mount command:

mount -t nfs 192.168.5.63:/data/db/elasticsearch/backup /mnt/elasticsearch
