Example Analysis of distributed Storage ceph object Storage configuring zone synchronization

2025-04-06 Update From: SLTechnology News&Howtos

Shulou(Shulou.com) 05/31 Report

This article walks through an example of configuring zone synchronization for distributed Ceph object storage. The content is organized step by step and should be easy to follow; we hope it helps answer your questions as you study and learn along with it.

1. Architecture:

Ceph natively supports the "two sites, three data centers" concept. The active-active setup we are building here spans two data centers, and the two data centers can sit in a single Ceph cluster or in two different clusters.

2. Concepts:

Zone: a logical concept that contains one or more RGW instances. A zone cannot span clusters. Data in the same zone is stored in the same set of pools.

Zonegroup: a zonegroup contains one or more zones. If a zonegroup contains more than one zone, one zone must be designated the master zone to handle bucket and user creation. A cluster can hold multiple zonegroups, and a zonegroup can also span multiple clusters.

Realm: a realm contains one or more zonegroups. If a realm contains more than one zonegroup, one zonegroup must be designated the master zonegroup to handle system operations. A system can contain multiple realms, and resources are completely isolated between realms.

RGW active-active operation happens between the zones of a single zonegroup: data is kept fully consistent across all zones in the zonegroup, so users can read and write the same data through any zone. Metadata operations, such as creating buckets and creating users, can still only be performed in the master zone, while data operations, such as creating objects in a bucket or accessing objects, can be handled in any zone.
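To make the realm / zonegroup / zone hierarchy concrete, each level can be listed with radosgw-admin. A minimal sketch, guarded so it is a harmless no-op on machines without radosgw-admin or a running cluster:

```shell
# Inspect the three-level hierarchy on an RGW node.
# The snippet does nothing where radosgw-admin is not available.
if command -v radosgw-admin >/dev/null 2>&1; then
    radosgw-admin realm list        # realms, e.g. earth
    radosgw-admin zonegroup list    # zonegroups in the current realm, e.g. china
    radosgw-admin zone list         # zones in the current zonegroup, e.g. huabei
fi
hierarchy_checked=yes
```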

3. Configure the master zone on cluster Cluster1

Create realm

radosgw-admin realm create --rgw-realm=earth --default

Create master zonegroup

First delete the default zonegroup

radosgw-admin zonegroup delete --rgw-zonegroup=default

Create a zonegroup for china

radosgw-admin zonegroup create --rgw-zonegroup=china --endpoints=ceph-1:7480 --master --default

Create master zone

First delete the default zone

radosgw-admin zone delete --rgw-zone=default

Create a zone for huabei

radosgw-admin zone create --rgw-zonegroup=china --rgw-zone=huabei --endpoints=ceph-1:7480 --default --master

Create a system account for synchronizing with the huadong zone

radosgw-admin user create --uid="sync-user" --display-name="sync user" --system

Update the zone configuration with the access key and secret generated when creating the system account

radosgw-admin zone modify --rgw-zone=huabei --access-key={access-key} --secret={secret}
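The {access-key} and {secret} placeholders are the key pair that the user-create step printed as JSON. One way to capture them is with jq; a minimal sketch, assuming jq is installed (the JSON below is an abbreviated stand-in for the real command output, which contains many more fields):

```shell
# Abbreviated stand-in for the JSON printed by "radosgw-admin user create";
# the real output is larger but has the same "keys" array.
user_json='{"user_id":"sync-user","keys":[{"user":"sync-user","access_key":"AK_EXAMPLE","secret_key":"SK_EXAMPLE"}]}'

# Pull the first key pair out of the JSON with jq.
access_key=$(printf '%s' "$user_json" | jq -r '.keys[0].access_key')
secret_key=$(printf '%s' "$user_json" | jq -r '.keys[0].secret_key')

# These values are what gets passed to "radosgw-admin zone modify".
echo "access_key=$access_key secret_key=$secret_key"
```

On a live cluster the same JSON can be re-printed at any time with radosgw-admin user info --uid="sync-user".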

Update period

radosgw-admin period update --commit

Configure ceph.conf

[client.rgw.ceph-1]
host = ceph-1
rgw frontends = "civetweb port=7480"
rgw_zone = huabei

4. Configure the slave zone on cluster Cluster2

Pull the realm from the master zone

radosgw-admin realm pull --url=http://ceph-1:7480 --access-key={access-key} --secret={secret}

Note: the access key and secret here are those of the system account on the master zone

Pull the period

radosgw-admin period pull --url=http://ceph-1:7480 --access-key={access-key} --secret={secret}

Note: the access key and secret here are those of the system account on the master zone

Create slave zone

radosgw-admin zone create --rgw-zonegroup=china --rgw-zone=huadong \
    --access-key={system-key} --secret={secret} \
    --endpoints=ceph-2:7480

Note: the access key and secret here are those of the system account on the master zone

Update period

radosgw-admin period update --commit

Note: if an authentication error occurs, restart the RGW instance service of the master zone

Configure ceph.conf

[client.rgw.ceph-2]
host = ceph-2
rgw frontends = "civetweb port=7480"
rgw_zone = huadong

5. Verify data synchronization between zones

Execute on the secondary (slave) zone node

radosgw-admin sync status
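The exact wording of the status output varies by Ceph release, but in a healthy two-zone setup you generally want the metadata and data sections to report being caught up. A guarded sketch (no-op on machines without a cluster):

```shell
# On the slave (huadong) node, report replication state. In a healthy
# setup the output includes lines along the lines of
#   "metadata is caught up with master"
#   "data is caught up with source"
# (exact wording varies by Ceph release).
if command -v radosgw-admin >/dev/null 2>&1; then
    radosgw-admin sync status --rgw-zone=huadong
fi
sync_status_checked=yes
```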

Create a user on the master zone node

radosgw-admin user create --uid="testuser" --display-name="First User"

Create a bucket with an S3 client and put an object into it

Note: the same user must exist on the slave zone node in order to see the created bucket and the uploaded object.
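As a sketch of the bucket-and-object step using s3cmd against the master zone endpoint (s3cmd is assumed to be installed; {access-key}/{secret} are testuser's keys, and demo-bucket / hello.txt are hypothetical names):

```shell
# Create a bucket on the master zone endpoint (ceph-1:7480) and upload
# one object. Guarded so the snippet is a no-op where s3cmd is absent.
echo "hello from huabei" > /tmp/hello.txt
if command -v s3cmd >/dev/null 2>&1; then
    s3cmd --host=ceph-1:7480 --host-bucket=ceph-1:7480 --no-ssl \
          --access_key='{access-key}' --secret_key='{secret}' \
          mb s3://demo-bucket
    s3cmd --host=ceph-1:7480 --host-bucket=ceph-1:7480 --no-ssl \
          --access_key='{access-key}' --secret_key='{secret}' \
          put /tmp/hello.txt s3://demo-bucket/hello.txt
fi
```

Once data sync has caught up, the same object should be readable through the huadong endpoint (ceph-2:7480).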

That is the whole of "Example Analysis of distributed Storage ceph object Storage configuring zone synchronization". Thank you for reading! We hope the content shared here has been helpful; if you want to learn more, you are welcome to follow the industry information channel!
