
Automatically create and delete EBS snapshots using a Lambda function


This article describes using a Lambda function to automatically create and delete EBS snapshots. It refers to the official AWS China blog post on building an automated EBS snapshot lifecycle, "https://amazonaws-china.com/cn/blogs/china/construct-ebs-life-circle-management/", but differs from that post in that it does not use the DynamoDB service; the EBS snapshot backups are handled by Lambda alone. Of course, automatic snapshot creation must be paired with automatic deletion, otherwise the snapshot storage will keep growing and quietly increase the enterprise's IT costs.

When using the Aliyun and Tencent Cloud platforms, I always assumed an automatic snapshot policy was a basic feature every cloud vendor provided, and I kept that mindset after taking over AWS cloud projects. It was not until one of the AWS projects I was responsible for was migrated and needed snapshot backups that I found the AWS China platform has no built-in snapshot policy feature. Instead, I had to write the Lambda function myself and trigger it through a CloudWatch Event. A thousand words of complaints are omitted here.

Steps to create / delete snapshots: tag the EBS volumes that need snapshots -> create a policy and a role for running the Lambda function -> create the functions -> add triggers and logging.

Snapshot creation automation

1. Tag the EBS volumes that need snapshots

Tag each volume with at least the following two key/value pairs:

Key: Name, Value: user-defined (must not contain Chinese characters)
Key: Snapshot, Value: Snapshot (a required entry; the Key must be Snapshot)
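For reference, the same tags can also be attached from code rather than the console. The sketch below is only an illustration; the volume ID and tag value for Name are hypothetical, and it uses the same boto3 SDK as the rest of this article:

import boto3

ec2 = boto3.client('ec2')

# Attach the Name and Snapshot tags to a volume (hypothetical volume ID).
ec2.create_tags(
    Resources=['vol-0123456789abcdef0'],
    Tags=[
        {'Key': 'Name', 'Value': 'data-disk-01'},
        {'Key': 'Snapshot', 'Value': 'Snapshot'},
    ],
)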

2. Create policies and roles

Step 1: go to the IAM console, create a policy, choose the JSON format, and paste in the following document (it grants the account the ability to view EC2 information and operate on EC2 snapshots):

{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["logs:*"], "Resource": "*"}, {"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"} {"Effect": "Allow", "Action": ["ec2:CreateSnapshot", "ec2:DeleteSnapshot", "ec2:CreateTags", "ec2:ModifySnapshotAttribute", "ec2:ResetSnapshotAttribute"], "Resource": ["*"]}]}

Review the policy; you can see that the JSON document grants partial permissions on the EC2 and CloudWatch Logs services. Enter the policy name lambda_ebs_snapshot and a description, then click Save.

Step 2: create the role

Select Lambda as the trusted entity -> attach the policy created above (lambda_ebs_snapshot) -> add a tag -> enter the role name and description and click Save.
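For illustration, selecting Lambda as the trusted entity gives the role a trust relationship roughly like the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }
    ]
}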

Note: the trusted entity here must be Lambda; otherwise a permission error will occur later when the Lambda function is invoked with this role.

3. Create a function

Step 1: open the Lambda console, click Functions on the left, and click the create-function button in the upper right corner.

On the create function page, enter my_ebs_snapshots as the function name, choose Python 3.6 as the runtime, choose to use an existing role, select the role created above, and click Create.
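If you prefer to script this step instead of clicking through the console, a rough boto3 equivalent is sketched below; the role ARN, account ID, and deployment package path are hypothetical placeholders:

import boto3

lambda_client = boto3.client('lambda')

# Read a zipped deployment package that contains lambda_function.py (hypothetical path).
with open('my_ebs_snapshots.zip', 'rb') as f:
    zip_bytes = f.read()

lambda_client.create_function(
    FunctionName='my_ebs_snapshots',
    Runtime='python3.6',
    Role='arn:aws-cn:iam::123456789012:role/lambda_ebs_snapshot',  # hypothetical role ARN
    Handler='lambda_function.lambda_handler',
    Code={'ZipFile': zip_bytes},
    Timeout=60,
    Description='Create EBS snapshots for volumes tagged with Snapshot',
)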

Step 2: after the function is created, go to its configuration page.

If the previous steps were done correctly, you can see here that our Lambda function has permission to operate on both CloudWatch Logs and EC2.

Click the inline editor below and enter the code that automatically backs up the EBS volumes as snapshots.

import boto3
import os, time
from botocore.exceptions import ClientError
from datetime import datetime, timedelta, timezone

client = boto3.client('ec2')
ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    # Use Beijing time when building the snapshot description
    os.environ['TZ'] = 'Asia/Shanghai'
    time.tzset()
    i = time.strftime('%X %x %Z')
    # Collect the IDs of all volumes that carry the tag key 'Snapshot'
    describe_volumes = client.describe_volumes(
        Filters=[{'Name': 'tag-key', 'Values': ['Snapshot']}])
    volume_id_list = []
    for vol in describe_volumes['Volumes']:
        volume_id_list.append(vol.get('VolumeId'))
    # Create a snapshot for each tagged volume
    for volume_id in volume_id_list:
        volume = ec2.Volume(volume_id)
        for tags in volume.tags:
            if tags.get('Key') == 'Name':
                volume_name = tags.get('Value')
        description = volume_name + ' volume snapshot is created at ' + i
        try:
            response = client.create_snapshot(Description=description, VolumeId=volume_id)
        except ClientError:
            print('Create Snapshot occurred error, Volume id is ' + volume_id)
        else:
            print('Snapshot is created succeed, Snapshot id is ' + response.get('SnapshotId'))

4. Create a trigger

Step 1: click CloudWatch Events on the left to start configuring the trigger. The rule below triggers the function at 23:00 every evening.
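The rule's screenshot is not reproduced here; as a reference, CloudWatch Events cron expressions are evaluated in UTC, so a schedule expression that fires at 23:00 Beijing time (UTC+8) would be:

cron(0 15 * * ? *)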

When you are finished, click add.

Step 2: then configure CloudWatch Logs and click Add.

After the above steps are complete, click Save in the upper right corner.

Attached: Lambda function execution log

Snapshot deletion automation

Here we take retaining six days of snapshot data as an example (the function deletes snapshots whose description carries the date from seven days ago); you can test and adjust this according to your actual situation. The my_ebs_snapshot_delete function code is as follows:

import re
import boto3
import os, time
from botocore.exceptions import ClientError
from datetime import datetime, timedelta, timezone

client = boto3.client('ec2')
ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    s = 0  # counter for deleted snapshots
    # Use Beijing time, matching the format used when the snapshots were created
    os.environ['TZ'] = 'Asia/Shanghai'
    time.tzset()
    # i = time.strftime('%X %x %Z')
    i = time.strftime('%x %Z')
    # Date string from seven days ago; snapshots whose description contains it are deleted
    j = (datetime.now() - timedelta(days=7)).strftime('%x %Z')
    print(j)
    # Collect the IDs of all volumes that carry the tag key 'Snapshot'
    describe_volumes = client.describe_volumes(
        Filters=[{'Name': 'tag-key', 'Values': ['Snapshot']}])
    volume_id_list = []
    for vol in describe_volumes['Volumes']:
        volume_id_list.append(vol.get('VolumeId'))
    # Delete the matching snapshots of each tagged volume
    for volume_id in volume_id_list:
        volume = ec2.Volume(volume_id)
        # print(volume_id)
        for tags in volume.tags:
            if tags.get('Key') == 'Name':
                volume_name = tags.get('Value')
        # description = volume_name + ' volume snapshot is created at ' + i
        for snapshot in volume.snapshots.all():
            match = re.findall(j, snapshot.description)
            if match:
                s = s + 1
                print(snapshot.description)
                snapshot.delete()
    print('number of eligible snapshots ' + str(s))

To make it easy to check the function's execution results, it is recommended to configure test events on the function page, so that you do not have to keep modifying the trigger just to run the function.
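Because lambda_handler ignores the contents of the incoming event, any minimal test event is enough, for example an empty JSON object:

{}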
