2025-04-05 Update From: SLTechnology News&Howtos
In this issue, the editor looks at how Bucket upload policies and URL signatures can be bypassed and exploited. The article is detailed and approaches the topic from a professional point of view; I hope you find it useful.
Introduction
A Bucket upload policy is a convenient way to upload data directly from the client to a Bucket (storage space). Using the rules in the upload policy, together with the logic tied to accessing certain files, we will show how to obtain a complete list of Bucket objects and how to modify or delete existing files in the Bucket.
What is a Bucket policy?
(If you already know what Bucket policies and URL signatures are, you can skip ahead to the exploitation sections below.)
A Bucket policy is a secure way to upload content directly to a large cloud-based store such as Google Cloud Storage or AWS S3. The idea is to create a policy that defines what may be uploaded, sign the policy with a secret key, and hand the policy and its signature to the client.
The client can then upload the file directly to the Bucket; the Bucket store verifies that the uploaded content matches the policy, and accepts the upload only if it does.
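The signing step described above can be sketched in a few lines. This is a minimal illustration in the style of S3's signature-V2 POST policies (HMAC-SHA1 over the base64-encoded policy); the policy values and secret key are hypothetical:

```python
import base64
import hashlib
import hmac
import json

def sign_post_policy(policy: dict, secret_key: str) -> tuple[str, str]:
    """Base64-encode the policy JSON, then sign it with HMAC-SHA1
    (the scheme S3 signature-V2 POST policies use)."""
    encoded = base64.b64encode(json.dumps(policy).encode()).decode()
    digest = hmac.new(secret_key.encode(), encoded.encode(), hashlib.sha1).digest()
    return encoded, base64.b64encode(digest).decode()

# Hypothetical policy and key, for illustration only.
policy = {
    "expiration": "2018-07-31T13:55:50Z",
    "conditions": [
        {"bucket": "bucket-name"},
        ["starts-with", "$key", "acc123"],
    ],
}
encoded_policy, signature = sign_post_policy(policy, "not-a-real-secret")
# The client submits encoded_policy and signature as form fields; the
# bucket recomputes the HMAC and checks the upload against the policy.
```

Because only the server knows the secret key, the client cannot alter the policy without invalidating the signature; the weaknesses discussed below come from what the policy itself allows.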
Upload policies and URL pre-signing
Before we begin, we need to be clear that there are several ways to access objects in a Bucket. When uploading to a Bucket with a POST request, a POST policy (AWS) or POST object (Google Cloud Storage) only allows content to be uploaded.
Another approach, called URL pre-signing (AWS) or URL signing (Google Cloud Storage), is not limited to creating objects. Whether we can PUT, DELETE, or GET an object that is private by default depends on which HTTP method the pre-signing logic specifies.
Compared with POST policies, pre-signed URLs are relatively lenient about content types (Content-Type), access control, and what may be uploaded. Signed URLs are also more often produced by flawed custom logic, as shown below.
There are many ways to let someone upload content. One of them is AssumeRoleWithWebIdentity, which is similar to a POST policy except that you obtain temporary security credentials (access key IDs beginning with ASIA) created from a predefined IAM Role (Identity and Access Management role).
How to discover upload policies or URL signatures
An upload request using the POST method carries the policy as a form field. The policy is base64-encoded JSON; decoded, it looks like this:
{
  "expiration": "2018-07-31T13:55:50Z",
  "conditions": [
    {"bucket": "bucket-name"},
    ["starts-with", "$key", "acc123"],
    {"acl": "public-read"},
    {"success_action_redirect": "https://dashboard.example.com/"},
    ["starts-with", "$Content-Type", ""],
    ["content-length-range", 0, 524288]
  ]
}
A URL signature on AWS S3 looks similar to the following:
https://bucket-name.s3.amazonaws.com/?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA...
And on Google Cloud Storage:
https://storage.googleapis.com/uploads/images/test.png?Expires=1515198382&GoogleAccessId=example%40example.iam.gserviceaccount.com&Signature=dlMA---

Exploiting upload policies
If we want to find flaws in a policy and exploit them, we first define a few attributes:
- Access=Yes: whether we can access the file in some way after uploading, i.e. whether the ACL in the policy is defined as public-read, or whether we can obtain pre-signed URLs for the uploaded files. Objects uploaded under a policy with no ACL defined are private by default.
- Inline=Yes: whether the content can be served inline. If we can modify content-disposition, we can serve the content inline; if it is not defined in the policy at all, the file is served inline.
1. Starts-with $key is empty
Example:
["starts-with", "$key", ""]
You can set the key property to anything and the policy will still be accepted, so you can upload files to any location in the Bucket and overwrite any object.
Note: in some cases this is hard to exploit. For example, a Bucket may only hold uploaded objects named with a UUID (Universally Unique Identifier) that is never exposed or used later. In that case we do not know which files to overwrite, and we do not know the names of any other objects in the Bucket.
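To see why an empty starts-with prefix is dangerous, consider a simplified sketch of how the $key condition is evaluated (a hypothetical helper, not the real bucket-side code):

```python
def key_allowed(conditions, key: str) -> bool:
    """Check only the ["starts-with", "$key", prefix] conditions of a
    POST policy (a simplification of bucket-side validation)."""
    for cond in conditions:
        if isinstance(cond, list) and cond[:2] == ["starts-with", "$key"]:
            if not key.startswith(cond[2]):
                return False
    return True

loose = [["starts-with", "$key", ""]]                 # the flawed policy
strict = [["starts-with", "$key", "uploads/acc123/"]]  # a scoped policy

# An empty prefix accepts *any* key, so an attacker can overwrite
# any existing object in the bucket.
assert key_allowed(loose, "index.html")
assert key_allowed(loose, "users/1/avatar.png")
# A scoped prefix confines uploads to one path.
assert not key_allowed(strict, "index.html")
```

Every string starts with the empty string, so the condition is satisfied by any key the attacker chooses.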
2. Starts-with $key contains no path separator, or uses the same path for all users
Example:
["starts-with", "$key", "acc_1322342m3423"]
If the $key part of the policy defines a prefix but contains no path separator, we can place content directly in the root of the Bucket. If Access=Yes and Inline=Yes, then depending on the content-type (see #3 and #4) we can steal other users' uploaded URLs by installing an AppCache manifest (the AppCache vulnerabilities were discovered by me, @avlidienbrunn, and @filedescriptor).
This problem also applies if the path of the uploaded object is the same for all users.
3. Starts-with $Content-Type is empty
Example:
["starts-with", "$Content-Type", ""]
If Access=Yes and Inline=Yes, we can upload text/html and serve it from the Bucket's domain. As in #2, we can use this to run JavaScript or install an AppCache manifest on that path, meaning all files accessed under the path are leaked to the attacker.
4. Use starts-with $Content-Type to define content types
Example:
["starts-with", "$Content-Type", "image/jpeg"]
As in #3, we can make the first content type an unknown MIME type and append text/html, and the file will be treated as text/html:
Content-type: image/jpegz;text/html
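The trick works because the policy's check is a literal prefix match, not a MIME-type parse; a two-line sketch:

```python
# The policy requires the content type to start with "image/jpeg".
required_prefix = "image/jpeg"

# The attacker's value passes the prefix check, yet "image/jpegz" is an
# unknown MIME type, so the second value, text/html, may win out when
# the file is served.
malicious = "image/jpegz;text/html"
assert malicious.startswith(required_prefix)
```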
In addition, with the policy above, if the S3 Bucket is hosted on a subdomain of the company, uploading an HTML file lets us run JavaScript on that domain.
The most interesting part is to take advantage of the site by uploading content on the sandbox domain.
Exploiting URL signatures created by custom logic
URL signatures are created on the server side and handed to the client to let it upload, modify, or access content. The most common problems arise when websites build custom logic to produce them.
First, to understand how to exploit signed URLs, it helps to know that a signed GET-URL for the Bucket's root will, by default, return a listing of the Bucket's files. This is essentially the same as a publicly listable Bucket, except that this Bucket may contain other users' private data.
Remember, when we know about other files in Bucket, we can also request URL signatures for them, which gives us access to private files.
Therefore, our goal is always to try to get the root directory or another known file.
Examples of incorrect custom logic
Here are some examples where the logic actually exposes the root path of the Bucket by issuing a signed GET-URL.
1. Full read access to the Bucket via a get-image endpoint
The endpoint takes a request like this:
https://freehand.example.com/api/get-image?key=abc&document=xyz
and returns the following URL signature:
https://prodapp.s3.amazonaws.com/documents/648475/images/abc?X-Amz-Algorithm=AWS4-HMAC-SHA256...
However, the endpoint normalizes the URL before signing it, so by traversing the path we can point it at the root of the Bucket:
https://freehand.example.com/api/get-image?key=../../../&document=xyz
Results:
https://prodapp.s3.amazonaws.com/?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA...
This URL provides a list of all the files in the Bucket.
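The normalization flaw can be reproduced in a few lines. This is a hypothetical reconstruction of the endpoint's path handling, not its actual code:

```python
import posixpath

def object_path(key: str, document: str = "648475") -> str:
    """Build the object path the way the vulnerable endpoint might:
    interpolate the user-supplied key, then normalize before signing."""
    return posixpath.normpath(f"documents/{document}/images/{key}")

# Normal use signs a specific object...
assert object_path("abc") == "documents/648475/images/abc"
# ...but a traversal key collapses the path to "." (the bucket root once
# prefixed with the bucket URL), so the endpoint signs a root listing.
assert object_path("../../..") == "."
```

The fix is to reject keys containing path separators or ".." before building the path, rather than normalizing after the fact.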
2. Full read access obtained because a regular expression parses the URL-signature request
Here is another example. The following request asks an endpoint on the website for a URL signature for the desired object:
POST /api/file_service/file_upload_policies/s3_url_signature.json HTTP/1.1
Host: sectest.example.com

{"url": "https://example-bucket.s3.amazonaws.com/dir/file.png"}
The endpoint parses the URL and uses part of it to build the URL signature, returning:
{"signedUrl": "https://s3.amazonaws.com/example-bucket/dir/file.png?X-Amz-Algorithm=AWS4-HMAC..."}
An S3 Bucket can be addressed both via a subdomain and via a path on s3.amazonaws.com; in this case the server-side logic rewrote the URL into the path-based Bucket URL.
By fooling the URL parsing, you can send the following:
{"url": "https://.x./example-bucket"}
and it returns a URL signature like this:
{"signedURL": "https://s3.amazonaws.com//example-bucket?X-Amz-Algorithm=AWS4-HMAC..."}
This URL displays a complete list of files for Bucket.
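A hypothetical version of the flawed extraction logic shows how the crafted URL slips through (the real endpoint's parser is unknown; this merely reproduces the observed behavior):

```python
import re

def to_path_style(url: str) -> str:
    """Naively take the host's first label as the bucket name and reuse
    the path as the object key, then build the path-style URL to sign
    (signing itself omitted)."""
    m = re.match(r"https://([^./]*)\.[^/]*/(.*)", url)
    bucket, key = m.group(1), m.group(2)
    return f"https://s3.amazonaws.com/{bucket}/{key}"

# Intended use: virtual-hosted URL -> path-style URL.
assert (to_path_style("https://example-bucket.s3.amazonaws.com/dir/file.png")
        == "https://s3.amazonaws.com/example-bucket/dir/file.png")

# Crafted input: the "bucket" label before the first dot is empty, so the
# attacker-chosen path lands where the bucket name belongs, yielding a
# signed URL for the bucket root listing.
assert (to_path_style("https://.x./example-bucket")
        == "https://s3.amazonaws.com//example-bucket")
```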
3. Abusing temporary URL signature links
This example is from two years ago; it was the first URL-signature problem I found.
On this site, uploading a file first creates a random key via secure.example.com:
POST /api/s3_file/ HTTP/1.1
Host: secure.example.com

{"id": null, "random_key": "abc-123-def-456-ghi-789", "s3_key": "/file.jpg", "uploader_id": 71957, "employee_id": null}
The response is:
HTTP/1.1 201 CREATED

{"employee_id": null, "s3_key": "/file.jpg", "uploader_id": 71957, "random_key": "abc-123-def-456-ghi-789", "id": null}
This means that the following URL:
https://secure.example.com/files/abc-123-def-456-ghi-789
It is then redirected to:
Location: https://example.s3.amazonaws.com/file.jpg?Signature=i0YZ...
You can instead send the following s3_key content:
"random_key": "xx1234", "s3_key": "/"
After that, there is the following URL:
https://secure.example.com/files/xx1234
Redirect to:
Location: https://example.s3.amazonaws.com/?Signature=i0YZ...
At this point I had a listing of their Bucket's files. The site used a single Bucket to store all of its data, every document and file it owned. When I tried to extract the file list, the Bucket turned out to be so large that the number of files ran into the millions. I reported the vulnerability directly to the company.
Recommendations
A separate upload policy should be generated for each file-upload request, or at least one per user:
- $key should be fully defined: a unique, random name under a random path.
- content-disposition should preferably be defined as attachment.
- acl should preferably be private, or not defined at all.
- content-type should be set explicitly (without starts-with) or not set at all.
Also, never create a URL signature based on the user's request parameters, or situations like those shown above will occur.
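As a sketch of the safer pattern, the server derives every part of the signed URL itself from server-controlled values (shown here with the older signature-V2 query scheme for brevity; the key names, object path, and credentials are hypothetical):

```python
import base64
import hashlib
import hmac
import urllib.parse

def presign_get(bucket: str, key: str, expires: int,
                access_key: str, secret: str) -> str:
    """Signature-V2 style presigned GET: sign a fixed string-to-sign
    built only from server-controlled values, never raw user input."""
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return (f"https://s3.amazonaws.com/{bucket}/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={signature}")

# The server picks the bucket and a random object path itself.
url = presign_get("bucket-name", "uploads/3f2c9a/file.jpg", 1515198382,
                  "AKIAEXAMPLE", "not-a-real-secret")
```

Because the string-to-sign is assembled entirely on the server, a client cannot steer the signature toward the bucket root or another user's object.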
The worst thing I've ever seen is:
https://secure.example.com/api/file_upload_policies/multipart_signature?to_sign=GET%0A%0A%0A%0Ax-amz-date%3AFri%2C%2009%20Mar%202018%2000%3A11%3A28%20GMT%0A%2Fbucket-name%2F&datetime=Fri,%2009%20Mar%202018%2000:11:28%20GMT
You simply give it the string you want signed, and it replies with the signature:
0zfAa9zIBlXH76rTitXXXuhEyJI =
which can then be used to make a signed request:
curl -H "Authorization: AWS AKIAJAXXPZR2XXX7ZXXX:0zfAa9zIBlXH76rTitXXXuhEyJI=" -H "x-amz-date: Fri, 09 Mar 2018 00:11:28 GMT" https://s3.amazonaws.com/bucket-name/
This signing method is not limited to S3: it lets you sign any request you like to any AWS service that the AWS key is permitted to use.
The above shows how Bucket upload policies and URL signatures can be bypassed and exploited. If you have similar questions, the analysis above may help you work through them. To learn more, you are welcome to follow the industry information channel.