How to Configure Object Storage Zone Synchronization in Ceph Distributed Storage


This article explains how to configure object storage zone synchronization in Ceph distributed storage. The walkthrough is practical, so it is shared here as a reference; follow along to see how it is done.

1. Architecture:

Ceph natively embodies the idea of "two sites, three data centers". The active-active setup we are building here spans two data centers, and with Ceph those two data centers can live in a single cluster or in separate clusters.

2. Concepts:

Zone: a logical concept that contains one or more RGW instances. A zone cannot span clusters, and all data of one zone is stored in the same set of pools.

Zonegroup: a zonegroup contains one or more zones. If a zonegroup contains more than one zone, one zone must be designated as the master zone to handle bucket and user creation. A cluster can hold multiple zonegroups, and a zonegroup can also span multiple clusters.

Realm: a realm contains one or more zonegroups. If a realm contains more than one zonegroup, one zonegroup must be designated as the master zonegroup to handle system operations. A system can contain multiple realms, and resources are completely isolated between realms.

RGW multi-active operation takes place between the zones of a single zonegroup: data across those zones is kept fully consistent, so users can read and write the same data through any zone. Metadata operations, such as creating buckets and users, can only be performed in the master zone, while data operations, such as creating objects in a bucket or reading objects, can be handled by any zone.
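As a quick way to see how realms, zonegroups, and zones fit together on a running cluster, radosgw-admin can list and dump these objects. This is a small aside, not part of the original steps; the names china and huabei match the configuration built below:

radosgw-admin realm list
radosgw-admin zonegroup list
radosgw-admin zone list
radosgw-admin zonegroup get --rgw-zonegroup=china
radosgw-admin zone get --rgw-zone=huabei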

Configure master zone on the Cluster1 cluster

Create realm

radosgw-admin realm create --rgw-realm=earth --default

Create master zonegroup

Delete the default zonegroup first

radosgw-admin zonegroup delete --rgw-zonegroup=default

Create a zonegroup for china

radosgw-admin zonegroup create --rgw-zonegroup=china --endpoints=ceph-1:7480 --master --default

Create master zone

Delete the default zone first

radosgw-admin zone delete --rgw-zone=default

Create a zone for huabei

radosgw-admin zone create --rgw-zonegroup=china --rgw-zone=huabei --endpoints=ceph-1:7480 --default --master

Create a system account to synchronize with huadong zone

radosgw-admin user create --uid="sync-user" --display-name="sync user" --system
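The command above prints the generated access key and secret key in its JSON output. If you need to read them again later, they can also be shown with radosgw-admin user info (a small aside, not part of the original steps):

radosgw-admin user info --uid="sync-user"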

Update the zone configuration with the access key and secret key generated when creating the system account

radosgw-admin zone modify --rgw-zone=huabei --access-key={access-key} --secret={secret}

Update period

radosgw-admin period update --commit

Configure ceph.conf

[client.rgw.ceph-1]
host = ceph-1
rgw frontends = "civetweb port=7480"
rgw_zone = huabei
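For the rgw_zone setting to take effect, the RGW instance on ceph-1 needs to be restarted. The service name below assumes a systemd-based deployment with the usual ceph-radosgw unit naming; adjust it to match your deployment:

systemctl restart ceph-radosgw@rgw.ceph-1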

Configure slave zone on the Cluster2 cluster

Pull realm from master zone

radosgw-admin realm pull --url=ceph-1:7480 --access-key={access-key} --secret={secret}

Note: the access key and secret here are those of the system account created in the master zone

Pull period

radosgw-admin period pull --url=ceph-1:7480 --access-key={access-key} --secret={secret}

Note: the access key and secret here are those of the system account created in the master zone

Create slave zone

radosgw-admin zone create --rgw-zonegroup=china --rgw-zone=huadong \
    --access-key={system-key} --secret={secret} \
    --endpoints=ceph-2:7480

Note: the access key and secret here are those of the system account created in the master zone

Update period

radosgw-admin period update --commit

Note: if an authentication error occurs, restart the RGW instance service in the master zone

Configure ceph.conf

[client.rgw.ceph-2]
host = ceph-2
rgw frontends = "civetweb port=7480"
rgw_zone = huadong
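As on the master side, restart the RGW instance on ceph-2 so that rgw_zone=huadong is picked up; again, the unit name assumes a typical systemd deployment:

systemctl restart ceph-radosgw@rgw.ceph-2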

Verify data synchronization between zones

Execute on the secondary zone node

radosgw-admin sync status

Create a user on the master zone node

radosgw-admin user create --uid="testuser" --display-name="First User"

Create a bucket with an S3 client and put an object, for example:
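A minimal sketch of this step using the AWS CLI pointed at the master zone endpoint; the bucket name, file name, and placeholder credentials are illustrative (not from the original article), and the keys are assumed to be those printed when testuser was created:

# S3 credentials: the access key and secret key printed when testuser was created (placeholders here)
aws configure set aws_access_key_id {testuser-access-key}
aws configure set aws_secret_access_key {testuser-secret-key}
# Create a bucket and upload an object through the master zone (huabei) endpoint
aws --endpoint-url http://ceph-1:7480 s3 mb s3://test-bucket
echo "hello ceph" > hello.txt
aws --endpoint-url http://ceph-1:7480 s3 cp hello.txt s3://test-bucket/
# Once sync has caught up, the same bucket and object should be visible through the slave zone (huadong) endpoint
aws --endpoint-url http://ceph-2:7480 s3 ls s3://test-bucket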

Note: the same user must be created on the slave zone node to see the created bucket and the uploaded object.

Thank you for reading! That concludes this walkthrough of configuring object storage zone synchronization in Ceph distributed storage; I hope it helps you learn something new.
