
How to use Glusterfs's peer/volume/brick


This article introduces how to use GlusterFS's peer, volume, and brick operations. These are situations many people run into in practice, so let me walk you through how to handle them. I hope you read it carefully and get something useful out of it!

1. Peer

Start with peer. The current GlusterFS cluster consists of the local node plus the .11 and .12 nodes, as gluster peer status shows in List-1.

List-1

[root@master1 /]# gluster peer status
Number of Peers: 2

Hostname: 192.168.33.11
Uuid: 8c22b08f-7232-4ac9-b5d8-8262db2d4ee7
State: Peer in Cluster (Connected)

Hostname: 192.168.33.12
Uuid: 7906f9a9-c58b-4c6e-93af-f4d9960b6220
State: Peer in Cluster (Connected)

You can also view the pool with gluster pool list, as shown in List-2.

List-2

[root@master1 /]# gluster pool list
UUID                                    Hostname        State
8c22b08f-7232-4ac9-b5d8-8262db2d4ee7    192.168.33.11   Connected
7906f9a9-c58b-4c6e-93af-f4d9960b6220    192.168.33.12   Connected
a2d23b65-381e-45ea-a488-e9fee45e5928    localhost       Connected

To add a node, use gluster peer probe, as in List-3, which adds node .13. Either a hostname or an IP address works.

List-3

[root@master1 /]# gluster peer probe -H
peer probe: failed: -H is an invalid address
Usage:
peer probe { <HOSTNAME> | <IP-address> }

[root@master1 /]# gluster peer probe 192.168.33.13
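If DNS or /etc/hosts resolution is set up, the probe also works with a hostname; a minimal sketch, using a hypothetical name node3 for the .13 machine:

# node3 is a hypothetical hostname that must resolve to 192.168.33.13
gluster peer probe node3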

To remove a node from the pool, use the detach command, as shown in List-4 below; run gluster peer status afterwards to see the effect.

List-4

gluster peer detach HOSTNAME
gluster peer detach 192.168.33.13

2. Volume
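Before the Volume section, one more note on detach: if a detach is refused, for example because the peer is unreachable, the command also accepts a force variant; a sketch:

# force the detach when the normal detach is refused
gluster peer detach 192.168.33.13 force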

Create a volume, as shown in List-5 below. The /data_gluster directory must already exist on the .10, .11, and .12 nodes, otherwise the command reports that the directory does not exist. If a warning or other message gets in the way, you can append force. Note that without replica, only one copy of each file is kept in the cluster; data is not replicated to the other nodes.

List-5

gluster volume create hive_db_volume replica 3 \
    192.168.33.10:/data_gluster/hive_db_volume \
    192.168.33.11:/data_gluster/hive_db_volume \
    192.168.33.12:/data_gluster/hive_db_volume

# with force appended
gluster volume create hive_db_volume replica 3 \
    192.168.33.10:/data_gluster/hive_db_volume \
    192.168.33.11:/data_gluster/hive_db_volume \
    192.168.33.12:/data_gluster/hive_db_volume force
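For contrast, this is what creation without replica looks like: files are distributed across the bricks with a single copy each, so losing one brick loses the files on it. A minimal sketch, with the hypothetical volume name dist_volume:

# no replica: a plain distributed volume, one copy per file
gluster volume create dist_volume \
    192.168.33.10:/data_gluster/dist_volume \
    192.168.33.11:/data_gluster/dist_volume \
    192.168.33.12:/data_gluster/dist_volume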

Start the volume with volume start, as shown in List-6. Because I had already started this volume, the command reports that it is already started.

List-6

[root@master1 /]# gluster volume start hive_db_volume
volume start: hive_db_volume: failed: Volume hive_db_volume already started

# view volume information
[root@master1 /]# gluster volume info hive_db_volume

Volume Name: hive_db_volume
Type: Replicate
Volume ID: b34d2970-27b9-421a-8680-c242b38946e5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.33.10:/data_gluster/hive_db_volume
Brick2: 192.168.33.11:/data_gluster/hive_db_volume
Brick3: 192.168.33.12:/data_gluster/hive_db_volume
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

After that, the volume must be mounted before it can be used; you cannot operate on /data_gluster/hive_db_volume directly. See List-7. After mounting, we can store data under /mnt/gluster/hive_db on node .10. Note that we write data manually to /mnt/gluster/hive_db and GlusterFS automatically synchronizes it to /data_gluster/hive_db_volume; we must not operate on the /data_gluster/hive_db_volume directory directly, let alone manually delete data in it.

List-7

# create a directory under /mnt to mount on
mkdir -p /mnt/gluster/hive_db
# hive_db_volume is the volume we created earlier
mount -t glusterfs 192.168.33.10:/hive_db_volume /mnt/gluster/hive_db
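To make the mount survive reboots, the usual approach is an /etc/fstab entry; a sketch, assuming the same server and mount point as List-7:

# /etc/fstab
192.168.33.10:/hive_db_volume  /mnt/gluster/hive_db  glusterfs  defaults,_netdev  0 0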

On the .10 machine, work with files under /mnt/gluster/hive_db and then look at the other machines, as in List-8: create a file hello, write "hello world" into it, and then enter /data_gluster/hive_db_volume (the brick path we specified in List-5); the file we just created is there. The same file also shows up under /data_gluster/hive_db_volume on .11 and .12.

Note that we can only write under /mnt/gluster/hive_db on .10; writing under the brick directories on .11 and .12 does not work, because only .10 mounted the volume in List-7.

List-8

[root@master1 hive_db]# pwd
/mnt/gluster/hive_db
[root@master1 hive_db]# more hello
hello world

# then enter /data_gluster/hive_db_volume to view; the file is there
[root@master1 hive_db_volume]# pwd
/data_gluster/hive_db_volume
[root@master1 hive_db_volume]# more hello
hello world

# view on 11; 12 is the same
[root@node1 hive_db_volume]# pwd
/data_gluster/hive_db_volume
[root@node1 hive_db_volume]# more hello
hello world

3. Brick
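Before moving on to bricks: if root SSH access to the other nodes is configured, the replication in List-8 can be verified from .10 in one pass; a minimal sketch, assuming passwordless SSH logins:

# print the brick copy of the file on each replica node
for node in 192.168.33.11 192.168.33.12; do
    ssh root@$node "more /data_gluster/hive_db_volume/hello"
done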

Remove a brick, as shown in List-9:

Use volume remove-brick followed by the volume name.

The replica 2 parameter lowers the replica count: it was 3 when we created the volume, and it becomes 2 here.

The last parameter, 192.168.33.12:/data_gluster/hive_db_volume, names the brick to remove; after this, 192.168.33.12:/data_gluster/hive_db_volume no longer stores data for volume hive_db_volume.

List-9

[root@master1 /]# gluster volume remove-brick hive_db_volume replica 2 192.168.33.12:/data_gluster/hive_db_volume force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume. Do you want to continue? (y/n) y
volume remove-brick commit force: success
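Here force is acceptable because the two remaining bricks hold full replicas. On distributed volumes, remove-brick also supports a start/status/commit sequence that migrates files off the brick before removal; a sketch with placeholder names:

# VOLNAME and HOST:/brick are placeholders
gluster volume remove-brick VOLNAME HOST:/brick start
gluster volume remove-brick VOLNAME HOST:/brick status    # wait until migration completes
gluster volume remove-brick VOLNAME HOST:/brick commit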

After the operation in List-9, check the volume details, as in List-10: compared with List-6, there is one brick fewer. By now you can roughly see what a brick is: the volume's data is stored on these bricks, GlusterFS automatically keeps the bricks in sync, and removing a brick reduces the number of bricks that store the data.

List-10

[root@master1 hive_db_volume]# gluster volume info hive_db_volume

Volume Name: hive_db_volume
Type: Replicate
Volume ID: b34d2970-27b9-421a-8680-c242b38946e5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.33.10:/data_gluster/hive_db_volume
Brick2: 192.168.33.11:/data_gluster/hive_db_volume
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

A removed brick can be added back. As shown in List-11, use add-brick, then check the volume information: compared with List-10 there is one more brick, namely 192.168.33.12:/data_gluster/hive_db_volume.

Note: gluster volume status shows status information for all volumes.

List-11

[root@master1 /]# gluster volume add-brick hive_db_volume replica 3 192.168.33.12:/data_gluster/hive_db_volume force
volume add-brick: success

[root@master1 /]# gluster volume info hive_db_volume

Volume Name: hive_db_volume
Type: Replicate
Volume ID: b34d2970-27b9-421a-8680-c242b38946e5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.33.10:/data_gluster/hive_db_volume
Brick2: 192.168.33.11:/data_gluster/hive_db_volume
Brick3: 192.168.33.12:/data_gluster/hive_db_volume
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
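The re-added brick starts out empty, and on a replicated volume the self-heal mechanism copies the data back onto it. A sketch of how that can be triggered and watched:

gluster volume heal hive_db_volume full    # trigger a full self-heal onto the new brick
gluster volume heal hive_db_volume info    # list entries still pending heal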

You can also replace a brick with the replace-brick command, as shown in List-12.


The brick 192.168.33.12:/data_gluster/hive_db_volume is replaced by 192.168.33.12:/data_gluster/hive_db_volume2. Here it is a different directory on the same machine, but the replacement could just as well be on another machine.

List-12

gluster volume replace-brick hive_db_volume \
    192.168.33.12:/data_gluster/hive_db_volume \
    192.168.33.12:/data_gluster/hive_db_volume2 commit force
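After the replacement, it is worth confirming that the volume now lists the new brick and that its process is online; a sketch:

gluster volume info hive_db_volume      # Brick3 should now point at .../hive_db_volume2
gluster volume status hive_db_volume    # check that the replacement brick's process is online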

A volume can be used only while it is started. To stop a volume, use volume stop, as in List-13; once stopped, the volume cannot be used.

List-13

[root@master1 hive_db_volume]# gluster volume stop hive_db_volume
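volume stop normally asks for a y/n confirmation; in scripts the prompt can be suppressed with the CLI's script mode. A sketch:

# --mode=script answers prompts automatically, useful in automation
gluster --mode=script volume stop hive_db_volume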

To delete a volume, use the volume delete command, as in List-14.

List-14

[root@master1 hive_db_volume]# gluster volume delete hive_db_volume

This concludes "How to use Glusterfs's peer/volume/brick". Thank you for reading!
