2025-03-31 Update. From: SLTechnology News&Howtos (shulou), Servers section.
1. Introduction

1.1 Introduction
For storage, of course, the larger the capacity the better, and ideally you could write forever (no such luck!). What we can do is delete old, useless data, but nobody wants to delete data by hand forever. Instead, we can delete old data according to rules: by setting object lifecycle rules, data is removed automatically on a schedule. From the client's point of view, the cluster then appears to accept unlimited writes (the back end, of course, is finite). But deleting data is risky, so be careful!
1.2 Prerequisites
Setting an object lifecycle assumes your cluster is already in normal use and the object gateway service is running normally.
2. Server configuration
Whatever you configure, the service process has to be told about it, and the Ceph object gateway lifecycle is no exception.
## edit the configuration
# vim /etc/ceph/ceph.conf
...
## time window in which lifecycle processing may run
rgw_lifecycle_work_time = "00:00-24:00"
## debug interval: each "day" in a rule is treated as this many seconds (testing only)
rgw_lc_debug_interval = "10"
...
## restart the object gateway service
# systemctl restart ceph-radosgw.target
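After restarting the gateway, you can check lifecycle state from the admin side. A minimal sketch, assuming you have radosgw-admin available on a gateway or monitor node (these commands operate on a live cluster, so they are shown for reference only):

```shell
## list buckets that have lifecycle rules, with their processing status
radosgw-admin lc list

## manually trigger a lifecycle processing run
## (handy when testing with rgw_lc_debug_interval set)
radosgw-admin lc process
```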
3. Set the lifecycle with S3 Browser (Windows client)
Lifecycle rules can also be set through the S3 Browser client software (not described in detail here; it will be introduced separately in a later article).
4. Set the lifecycle (Linux client)

4.1 Install boto3
Install boto3. You can also install boto, but the scripts in this article are written for boto3; a boto version would need to be written yourself (the differences are small, or contact me and I'll gladly help!).
# pip install boto3

4.2 Configuration script (python)

# cat rgw_lifecycle_set.py
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
import boto3
from botocore.client import Config

## object gateway user credentials
aws_access_key_id = 'XXX'
aws_secret_access_key = 'XXX'
## bucket on which to set the rule
bucket_name = 'XXX'

s3 = boto3.client('s3',
                  region_name=None,
                  use_ssl=False,
                  ## endpoint URL, configure according to your environment
                  endpoint_url='http://ceph.com',
                  aws_access_key_id=aws_access_key_id,
                  aws_secret_access_key=aws_secret_access_key,
                  config=Config(s3={'addressing_style': 'path'}))

print s3.put_bucket_lifecycle(Bucket=bucket_name,
                              LifecycleConfiguration={
                                  'Rules': [{
                                      'Status': 'Enabled',
                                      'Prefix': '/',
                                      'Expiration': {'Days': 1},
                                      'ID': '79m9n5aucsjb1nqi1687nzbelqdkli3qwbtgzsm7n4nkfv6'
                                  }]
                              })
print s3.get_bucket_lifecycle(Bucket=bucket_name)
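In newer boto3 releases, put_bucket_lifecycle is deprecated in favor of put_bucket_lifecycle_configuration, which takes a Filter block instead of a top-level Prefix. A Python 3 sketch of the same idea; the helper function, the endpoint http://rgw.example.com, and the rule names here are my own illustrative assumptions, not part of the original script:

```python
def build_expiration_rule(rule_id, prefix, days):
    """Return a lifecycle rule dict in the shape expected by
    put_bucket_lifecycle_configuration (newer-style API)."""
    return {
        'ID': rule_id,
        'Filter': {'Prefix': prefix},   # replaces the old top-level 'Prefix'
        'Status': 'Enabled',
        'Expiration': {'Days': days},
    }

def apply_lifecycle(endpoint_url, access_key, secret_key, bucket, rule):
    # boto3 is imported here so build_expiration_rule stays usable without it
    import boto3
    from botocore.client import Config
    s3 = boto3.client('s3',
                      endpoint_url=endpoint_url,  # your RGW endpoint
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key,
                      config=Config(s3={'addressing_style': 'path'}))
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={'Rules': [rule]})
    return s3.get_bucket_lifecycle_configuration(Bucket=bucket)

# usage against a real cluster (endpoint and bucket are placeholders):
# rule = build_expiration_rule('expire-1d', 'logs/', 1)
# print(apply_lifecycle('http://rgw.example.com', 'XXX', 'XXX', 'mybucket', rule))
```

Note that with a Filter prefix such as 'logs/' the rule only expires objects under that prefix; use an empty prefix to match the whole bucket.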