This article walks through an example analysis of the Red Hat object gateway. It is intended as a reference for interested readers; hopefully you will learn something useful from it.
## 1. Configuration

### Change the default port
### Migrate from Apache to Civetweb
An apache-based configuration looks like the following:
[client.radosgw.gateway-node1]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = ""
log file = /var/log/radosgw/client.radosgw.gateway-node1.log
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
rgw print continue = false
To switch to Civetweb, remove the rgw socket path and rgw print continue options and replace the frontend, giving:
[client.radosgw.gateway-node1]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
log file = /var/log/radosgw/client.radosgw.gateway-node1.log
rgw_frontends = civetweb port=80
Restart rgw:
systemctl restart ceph-radosgw.service
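Civetweb listens on port 7480 by default, and the rgw_frontends line above is also where the default port is changed. A minimal sketch, with 8080 as an illustrative value (ports below 1024 require the corresponding privileges):

[client.radosgw.gateway-node1]
rgw_frontends = civetweb port=8080

After editing, restart rgw as above.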
### Civetweb SSL

Before v2.0, SSL for Civetweb had to be terminated in front of the gateway with HAProxy and keepalived. In v2.0 and above, Civetweb can use the OpenSSL library to provide TLS (Transport Layer Security) itself.
1. Create a self-signed certificate
# generate the rsa key
openssl genrsa -des3 -out server.key 1024
# generate the corresponding csr file
openssl req -new -key server.key -out server.csr
cp server.key server.key.orig
# remove the passphrase protection from the key file
openssl rsa -in server.key.orig -out server.key
# self-sign
openssl x509 -req -days 3650 -in server.csr -signkey server.key -out server.crt
cp server.crt server.pem
cat server.key >> server.pem
Note: in the second step (generating the CSR), the common name (CN) can be set to *.instance01.com.
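For reference, the CSR can also be generated non-interactively with the CN set on the command line; the subject fields below are illustrative placeholders:

openssl req -new -key server.key -out server.csr -subj "/C=CN/ST=State/L=City/O=Example/CN=*.instance01.com"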
2. Create the following symbolic links, otherwise an error occurs (the error message can be found in the log_file).
ln -s /lib64/libssl.so.1.0.1e /usr/lib64/libssl.so
ln -s /lib64/libcrypto.so.1.0.1e /usr/lib64/libcrypto.so
3. Configure the port information
[client.rgw.instance01]
host = ceph02
keyring = /var/lib/ceph/radosgw/ceph-rgw.instance01/keyring
log_file = /var/log/radosgw/ceph-client.rgw.instance01.log
rgw_dns_name = instance01.com
rgw thread pool size = 1000
rgw_enable_static_website = true
rgw_frontends = "civetweb port=443s ssl_certificate=/etc/ceph/private/server.pem error_log_file=/var/log/radosgw/civetweb.error.log access_log_file=/var/log/radosgw/civetweb.access.log"
5. Add the domain names to /etc/hosts:
192.168.141.129 website02.instance01.com
192.168.141.129 website01.instance01.com
6. Access https://website01.instance01.com directly; the browser reports a certificate error.
7. Add the custom certificate (server.crt here) to the browser's trust store and access the site again.
Taking IE as an example:
Settings -> Internet Options -> Content -> Certificates
Select Trusted Root Certification Authorities -> Import
### Exporting namespaces to NFS-GANESHA

NFS-GANESHA is a user-space NFS server that supports NFSv2, NFSv3 and NFSv4. It runs on Linux, BSD variants and POSIX-compliant Unixes.
In the v2.0 release, the export of S3 object namespaces through NFS v4.1 is provided.
Note: this feature is not commonly used and only supports S3.
A bucket appears as a directory in NFS, following the S3 conventions, and files or folders can be created inside it.
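As a rough sketch only, an NFS-Ganesha export for the RGW FSAL might look like the following; the export ID, user and keys are placeholders, and the exact options depend on the NFS-Ganesha version:

EXPORT
{
    Export_ID = 1;                        # unique export identifier
    Path = "/";                           # export the whole S3 namespace
    Pseudo = "/";                         # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    Protocols = 4;                        # NFS v4.1 as described above
    Transports = TCP;
    FSAL {
        Name = RGW;                       # use the RGW FSAL
        User_Id = "{rgw-user}";           # placeholder RGW/S3 user
        Access_Key_Id = "{access-key}";
        Secret_Access_Key = "{secret-key}";
    }
}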
## 2. Management (CLI)
The maximum number of PGs allowed per OSD; if it is exceeded, a warning is raised (the default is 300):

mon_pg_warn_max_per_osd = n
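As an illustration (400 is an arbitrary example value), the threshold can be raised in ceph.conf or injected at runtime. In ceph.conf:

[global]
mon_pg_warn_max_per_osd = 400

Or, without a restart:

ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'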
### Storage policy

The Ceph object gateway uses placement targets to store bucket and object data; a placement target specifies the pools used for buckets and objects. If no placement target is configured, buckets and objects are stored in the pools configured for the zone in which the gateway instance resides (the default target and pools).

Storage policies give object gateway clients a way to target a particular class of storage, for example SSDs, SAS drives or SATA drives.

Create a new pool .rgw.buckets.special for the special storage policy, for example with a custom erasure-code profile, a special CRUSH ruleset, replica count, or PG count.
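A minimal sketch of creating such a pool; the PG counts and the erasure-code profile name are illustrative assumptions:

ceph osd pool create .rgw.buckets.special 32 32
# or, with a previously defined erasure-code profile:
ceph osd pool create .rgw.buckets.special 32 32 erasure {profile-name}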
Get the region configuration and dump it to a file:

radosgw-admin region get > region.json

Add special-placement to placement_targets:
{"name": "default", "api_name": "", "is_master": "true", "endpoints": [], "hostnames": [], "master_zone": "", "zones": [{"name": "default", "endpoints": [] "log_meta": "false", "log_data": "false", "bucket_index_max_shards": 5}], "placement_targets": [{"name": "default-placement", "tags": []} {"name": "special-placement", "tags": []}], "default_placement": "default-placement"}
Get the zone configuration:

radosgw-admin zone get > zone.json

Edit the zone file and add the placement policy key:
{"domain_root": ".rgw", "control_pool": ".rgw.control", "gc_pool": ".rgw.gc", "log_pool": ".log", "intent_log_pool": ".usage-log", "usage_log_pool": ".usage", "user_keys_pool": ".users" "user_email_pool": ".users.email", "user_swift_pool": ".users.swift", "user_uid_pool": ".users.uid", "system_key": {"access_key": "," secret_key ":"} "placement_pools": [{"key": "default-placement", "val": {"index_pool": ".rgw.buckets.index", "data_pool": ".rgw.buckets" "data_extra_pool": ".rgw.buckets.index"}, {"key": "special-placement", "val": {"index_pool": ".rgw.buckets.index" "data_pool": ".rgw.buckets.special", "data_extra_pool": ".rgw.buckets.special"}]}
Write it back:

radosgw-admin zone set < zone.json

Update the region map:

radosgw-admin regionmap update

Restart rgw.

Test it with curl:

curl -i http://10.0.0.1/swift/v1/TestContainer/file.txt -X PUT -H "X-Storage-Policy: special-placement" -H "X-Auth-Token: AUTH_rgwtxxxxxx"

### Bucket index sharding

The index_pool holds the bucket index data; the default is .rgw.buckets.index. Users may put a very large number of objects (tens of millions or even hundreds of millions) into a single bucket. If the gateway admin interface is not used to set a maximum quota per bucket, the bucket index suffers performance degradation as users store large numbers of objects.

When buckets are allowed to hold large numbers of objects, v1.3 can shard the bucket index to remove this bottleneck.

rgw_override_bucket_index_max_shards

The parameter above sets the maximum number of shards per bucket. The default is 0 (0 disables bucket index sharding). It can be added under the [global] section, followed by a restart of the gateways.

For federated configurations, each zone has its own index_pool, so this parameter can be configured per zone's gateways:

radosgw-admin region get > region.json

Open the json file, edit bucket_index_max_shards for the relevant zones, save, and then:
radosgw-admin region set < region.json

Update the region map:

radosgw-admin regionmap update --name client.rgw.ceph-client
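For the simple, non-federated case mentioned above, a sketch of the [global] setting (the shard count of 16 is an illustrative value); restart the gateways afterwards:

[global]
rgw_override_bucket_index_max_shards = 16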
### Realms

A realm represents a globally unique namespace. It consists of one or more zonegroups, each containing one or more zones; zones contain buckets, which in turn store objects. Realms allow Ceph object gateways to support multiple namespaces and configurations.

A realm also has the notion of a period, which represents the state of the realm over time. Each period captures the state of the zonegroups and the configuration of the zones at that point in time. Whenever a zonegroup or zone is changed, the period must be updated and then committed.

For backward compatibility with v1.3 and earlier, Ceph does not create a realm by default. For a better experience, however, Red Hat recommends creating a realm on new clusters.
### Finding orphaned objects

In a healthy cluster there should not be any orphaned objects, but they can appear in some cases:

The rgw fails partway through an operation, which may leave objects orphaned.
An unknown bug.
Steps:
Create a new log pool:

rados mkpool .log
Search for orphaned objects:

radosgw-admin orphans find --pool=.rgw.buckets --job-id=abc123
Clean up the data produced by the orphan search job:

radosgw-admin orphans finish --job-id=abc123
## 3. Multiple data centers
A single-zone configuration typically consists of one zonegroup containing one zone and one or more rgw instances, with gateway requests load balanced across those instances. In a single-zone configuration, multiple gateway instances normally point to a single Ceph cluster. However, Red Hat supports several multi-site configurations for rgw:

Multi-zone: one zonegroup with multiple zones, each zone served by one or more rgw instances and backed by its own Ceph cluster. Multiple zones in a zonegroup provide disaster recovery. In 2.0 every zone is active and can accept writes; besides disaster recovery, multiple active zones can also serve as a foundation for a CDN.

Multi-zonegroup: formerly known as regions. Ceph still supports multiple zonegroups, each consisting of one or more zones. Within the same realm, zonegroups share a global namespace, which guarantees unique object IDs across zonegroups and zones.

Multi-realm: starting with 2.0, gateways support the notion of a realm, which can cover a single zonegroup or several zonegroups and provides a globally unique namespace. Multiple realms make it possible to support many different configurations and namespaces.

From Red Hat v2.0 onward, active-active zones can be configured, meaning that writes can also go to non-master zones.

A multi-site configuration is stored in a container called a realm. The realm contains zonegroups, zones and a period; a period is made up of multiple epochs, which track configuration changes.

This guide assumes four hosts: rgw1, rgw2, rgw3 and rgw4. A multi-site configuration requires a master zonegroup and a master zone. In addition, each zonegroup requires a master zone, and zonegroups may have one or more secondary zones:
rgw1 hosts the master zone of the master zonegroup.
rgw2 hosts a secondary zone of the master zonegroup.
rgw3 hosts the master zone of a secondary zonegroup.
rgw4 hosts a secondary zone of that secondary zonegroup.
### Zone pool names

Zone pools are named {zone-name}.pool-name. For example, for a zone called us-east, the pools can be named as follows:
.rgw.root
us-east.rgw.control
us-east.rgw.data.root
us-east.rgw.gc
us-east.rgw.log
us-east.rgw.intent-log
us-east.rgw.usage
us-east.rgw.users.keys
us-east.rgw.users.email
us-east.rgw.users.swift
us-east.rgw.users.uid
us-east.rgw.buckets.index
us-east.rgw.buckets.data
us-east.rgw.meta
### Updating the object gateways

When updating the object gateways, Red Hat recommends restarting the gateways of the master zonegroup and master zone first, and the non-master ones afterwards.
### Configure a master zone
Create a realm. A realm contains the multi-site configuration of zonegroups and zones and also enforces a globally unique namespace within it:

radosgw-admin realm create --rgw-realm={realm-name} [--default]
Create a master zonegroup:

radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default
Create a master zone:

radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
    --rgw-zone={zone-name} \
    --master --default \
    --endpoints={http://fqdn}[,{http://fqdn}]
Create a system user
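A sketch of creating the system user; the uid and display name below are placeholder values. The access key and secret key generated for this user are the ones referenced later when pulling the realm and creating non-master zones:

radosgw-admin user create --uid="{user-name}" --display-name="{display-name}" --system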
Update period
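Updating the period means committing it so the configuration change propagates; a sketch of the usual command, which applies each time the period must be updated in the steps below:

radosgw-admin period update --commit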
Update the ceph configuration file on the master zone host: add an rgw_zone entry, set to the zone name, to the zone's gateway instance section.

[client.rgw.{instance-name}]
...
rgw_zone={zone-name}
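Filled in for the example hosts above, a sketch with rgw1 as the master zone gateway and us-east as an assumed zone name:

[client.rgw.rgw1]
host = rgw1
rgw frontends = "civetweb port=80"
rgw_zone=us-east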
Restart rgw
### Configure a non-master zone
Pull the realm:

radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
Pull the period:

radosgw-admin period pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
Create the non-master zone. In v2.0 and above, all zones run in an active-active configuration by default: an object client can write data to any zone, and that zone replicates the data to the other zones in the same zonegroup. If the non-master zone should not accept writes, pass the --read-only flag when creating the zone to set up an active-passive relationship between the master zone and the non-master zone. In addition, supply the access key and secret key of the system user:

# radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
    --rgw-zone={zone-name} --endpoints={url} \
    --access-key={system-key} --secret={secret} \
    --endpoints=http://{fqdn}:80 \
    [--read-only]
Update the ceph configuration file on the non-master zone hosts:

[client.rgw.rgw2]
host = rgw2
rgw frontends = "civetweb port=80"
rgw_zone=us-west
Update period
Start the gateway
### Fault recovery
Make the non-master zone the master and default zone:

# radosgw-admin zone modify --rgw-zone={zone-name} --master --default
By default Ceph runs in an active-active configuration. If the cluster was set up active-passive, the non-master zone is read-only; remove the --read-only flag to allow it to accept writes.

Update the period so the change takes effect.
Restart rgw
If the former master zone recovers:

Pull the period from the current master zone onto the recovered zone.
Make the recovered zone the master and default zone.
Update the period.
Restart rgw in the recovered master zone.
If the non-master zone needs to be read-only, configure it with --read-only.
Update the period.
Restart rgw in the non-master zone.
### Migrate from a single data center to multiple data centers
Create realm
Rename the default zone and zonegroup
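For the rename step, the upstream radosgw-admin tool provides rename subcommands; a sketch, where us and us-east-1 are illustrative new names:

radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=us
radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup=us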
Configure the master zonegroup to join the realm
radosgw-admin zonegroup modify --rgw-realm={realm-name} --rgw-zonegroup={zonegroup-name} --endpoints http://{fqdn}:80 --master --default
Configure the master zone:

# radosgw-admin zone modify --rgw-realm={realm-name} --rgw-zonegroup={zonegroup-name} \
    --rgw-zone={zone-name} --endpoints http://{fqdn}:80 \
    --access-key={access-key} --secret={secret-key} \
    --master --default
Create a system user:

# radosgw-admin user create --uid={user-id} --display-name="{display-name}" \
    --access-key={access-key} --secret={secret-key} --system
Commit the updated configuration.
Restart rgw