Using the AWS S3 SDK for PHP to Build a Cloud Disk

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains how to use the AWS S3 SDK for PHP to build a cloud-disk service. The explanations are kept short and concrete, so they should be easy to follow and to apply.

Amazon S3 stands for Simple Storage Service, an object storage service (OSS). The advantages of object storage as the backend of a network disk will not be covered here. S3 provides a unified SDK covering almost every major language, and beyond upload and download at its core, the interface can satisfy nearly any business need. Having integrated several different storage SDKs over the years, I can say of the S3 API that there is little you can imagine that it cannot do, provided you can find what you need in its encyclopedic documentation. Below is a record of the important S3 interfaces I used during integration. The object storage here is a private deployment, and the sample code is PHP (the development language matters little, as long as its runtime supports IO multiplexing).

Connect

access_key and secret_key are the credentials issued by your object store; set endpoint to the address of your private deployment or of another cloud provider.

```php
use Aws\S3\S3Client;
use Aws\Credentials\Credentials;
use Aws\Exception\AwsException;
use Aws\Exception\MultipartUploadException;
use Aws\S3\ObjectUploader;
use Aws\S3\MultipartUploader;

$this->client = new S3Client([
    'endpoint'    => $endpoint,
    'region'      => 'us-east-1', // any region will do for a private deployment
    'verify'      => false,       // skip TLS certificate verification
    'credentials' => new Credentials($aws_access_key_id, $aws_secret_access_key),
    'version'     => 'latest',
]);
```

Upload

1. Simple file upload

```php
$uploader = new ObjectUploader($this->client, $bucket, $key, $source);
$result = $uploader->upload();
```

2. Multipart upload

If the file is large, a simple upload is slow and the code blocks for the duration of the transfer. For this case the API provides multipart upload:

```php
$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key'    => 'my-file.zip',
    'before_initiate' => function (\Aws\Command $command) {
        $command['CacheControl'] = 'max-age=3600';
    },
]);
$result = $uploader->upload();
```

The general principle is to split the file into parts and upload them concurrently. The method takes many parameters that control the part size, the number of parts uploaded in parallel, and callbacks for before initiation, after completion, and on failure. Very flexible.
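To make the splitting step concrete, here is a minimal sketch of how a multipart uploader might compute part boundaries for a file. The function name `computePartRanges` is ours, not part of the SDK; the 5 MB default mirrors S3's documented minimum part size.

```php
<?php
// Split a file of $fileSize bytes into numbered parts of at most $partSize
// bytes each, as a multipart uploader would before uploading concurrently.
function computePartRanges(int $fileSize, int $partSize = 5 * 1024 * 1024): array
{
    $ranges = [];
    $partNumber = 1;
    for ($offset = 0; $offset < $fileSize; $offset += $partSize) {
        $ranges[] = [
            'PartNumber' => $partNumber++,
            'Offset'     => $offset,
            'Length'     => min($partSize, $fileSize - $offset),
        ];
    }
    return $ranges;
}

// A 12 MB file with 5 MB parts yields three parts: 5 MB, 5 MB, 2 MB.
$ranges = computePartRanges(12 * 1024 * 1024);
```

Each range can then be read with `fread` at its offset and handed to a separate upload request.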

3. Asynchronous multipart upload

Multipart upload shortens the transfer of large files, but consider uploading from a browser to a server backed by OSS: the server must first receive the entire file (even if the browser sends it to the server in parts), assemble the final file, and only then upload it to OSS. The two transfers run one after the other, which costs extra time. To overlap them, use the asynchronous multipart upload feature that S3 provides:

```php
$res = $this->client->createMultipartUpload(array_merge([
    'Bucket'  => $bucket, // REQUIRED
    'Key'     => $key,    // REQUIRED
    'Expires' => $expire,
], $args));
return $res;
```

The call above creates a multipart upload task and returns an UploadId. As soon as the browser has uploaded one part of the file to the server, the server can forward that part to OSS under the UploadId; OSS holds the parts until the upload is completed (you must supply a PartNumber reflecting each part's position):

```php
$res = $this->client->uploadPart([
    'Body'       => $source,
    'Bucket'     => $bucket,     // REQUIRED
    'Key'        => $key,        // REQUIRED
    'PartNumber' => $PartNumber, // REQUIRED
    'UploadId'   => $UploadId,   // REQUIRED
]);
return $res;
```

Once the browser has sent every part and the server has forwarded every part to OSS, call the completion API:

```php
$res = $this->client->completeMultipartUpload([
    'Bucket'          => $bucket,          // REQUIRED
    'Key'             => $key,             // REQUIRED
    'MultipartUpload' => $MultipartUpload, // list of PartNumber/ETag pairs
    'UploadId'        => $UploadId,        // REQUIRED
]);
return $res;
```

In this way the browser-to-server and server-to-OSS uploads run concurrently, saving a large share of the total upload time.
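completeMultipartUpload expects its MultipartUpload argument to be a list of PartNumber/ETag pairs collected from the uploadPart responses. A minimal sketch of assembling that structure, assuming a hypothetical `$partEtags` map from part number to the ETag returned by each uploadPart call:

```php
<?php
// Build the 'MultipartUpload' argument for completeMultipartUpload from the
// ETags collected after each uploadPart call. Parts may arrive out of order
// when uploaded concurrently, so they are sorted by PartNumber first.
function buildMultipartUploadArg(array $partEtags): array
{
    $parts = [];
    foreach ($partEtags as $partNumber => $etag) {
        $parts[] = ['PartNumber' => $partNumber, 'ETag' => $etag];
    }
    usort($parts, fn($a, $b) => $a['PartNumber'] <=> $b['PartNumber']);
    return ['Parts' => $parts];
}

// Parts 2 and 1 finished in reverse order; the result lists part 1 first.
$arg = buildMultipartUploadArg([2 => '"etag-2"', 1 => '"etag-1"']);
```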

Those are the main upload functions. Each interface also accepts many more parameters and callbacks; since most of them are not needed here, I will not enumerate them.

Download

The download side of the PHP SDK is simpler than Python's, but it still supports downloading in ranges. There is a single API for downloading a file:

```php
$res = $this->client->getObject(array_merge([
    'Bucket' => $bucket,
    'Key'    => $key,
    'Range'  => 'bytes=0-9',
    'SaveAs' => '/path/to/save/file',
], $args));
return $res;
```

The most important parameter is Range, which enables resumable, ranged downloads for the browser. When a browser requests a file stored in OSS, the naive approach downloads the whole file to the server first and only then serves it to the browser in chunks. By passing Range, the server fetches just the requested byte range from OSS and streams it on to the browser, giving a non-blocking, pass-through download from OSS to the browser.
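A sketch of the translation step this relies on: turning a browser's Range header into the Range parameter for getObject. The function name is hypothetical, and the parser deliberately handles only the single-range `bytes=start-end` form; suffix ranges like `bytes=-500` are out of scope here.

```php
<?php
// Convert an HTTP Range header into an S3 Range parameter, clamped to the
// object size. Returns null for forms this sketch does not handle.
function rangeHeaderToS3Range(string $header, int $objectSize): ?string
{
    if (!preg_match('/^bytes=(\d+)-(\d*)$/', $header, $m)) {
        return null; // suffix ranges ("bytes=-500") not handled in this sketch
    }
    $start = (int)$m[1];
    // An open-ended range ("bytes=90-") runs to the last byte of the object.
    $end = $m[2] === '' ? $objectSize - 1 : min((int)$m[2], $objectSize - 1);
    if ($start > $end) {
        return null; // unsatisfiable range
    }
    return "bytes={$start}-{$end}";
}

$r1 = rangeHeaderToS3Range('bytes=0-9', 100);
$r2 = rangeHeaderToS3Range('bytes=90-', 100);
```

Each chunk the browser asks for then maps to exactly one ranged getObject call.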

Paging query

A normal paging query can use listObjects:

```php
$result = $this->client->listObjects([
    'Bucket'              => $bucket, // REQUIRED
    'Delimiter'           => '',
    'EncodingType'        => 'url',
    'ExpectedBucketOwner' => '',
    'Marker'              => '',
    'MaxKeys'             => $limit,
    'Prefix'              => '',
]);
```

MaxKeys is the page size; Marker is set to the key of the last object on the previous page, so for the next request it plays a role similar to an offset.
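To show how Marker/MaxKeys paging behaves, here is a sketch that simulates it over an in-memory sorted key list instead of a live listObjects call. The function name is ours; the rule it implements is that a page starts strictly after the marker key.

```php
<?php
// Simulate one page of a Marker/MaxKeys listing over sorted keys:
// skip everything up to and including the marker, then take $maxKeys keys.
function listPage(array $sortedKeys, string $marker, int $maxKeys): array
{
    $page = [];
    foreach ($sortedKeys as $key) {
        if ($key <= $marker) {
            continue; // not past the marker yet
        }
        $page[] = $key;
        if (count($page) === $maxKeys) {
            break;
        }
    }
    return $page;
}

$keys  = ['a.txt', 'b.txt', 'c.txt', 'd.txt'];
$page1 = listPage($keys, '', 2);          // first page
$page2 = listPage($keys, end($page1), 2); // marker = last key of page 1
```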

In fact, because OSS has no true offset-based paging, reaching page N this way means requesting every page from the first one onward, wasting a lot of resources. Instead, you can use the paginator the SDK provides:

```php
$args['Bucket']  = $bucket;
$args['MaxKeys'] = $limit;
$results = $this->client->getPaginator('ListObjects', $args);
$total_results = [];
// Skip the pages before the requested offset.
for ($i = 0; $i < $offset; $i++) {
    if ($results->valid()) {
        $results->next();
    } else {
        break;
    }
}
// Collect the page at the offset, if there is one.
if ($results->valid()) {
    array_push($total_results, $results->current());
    $results->next();
}
return $total_results;
```

Iterators exist in almost every language; their job is to preserve execution state between calls. The paginator above is such an iterator: each call to next() lazily fetches the following page using the continuation marker it has saved, so the calling code can step to the page it wants without managing Marker values itself.
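The same idea can be sketched with a plain PHP generator, which is how a paginator keeps its marker state between next() calls. The `$fetchPage` closure below is an in-memory stand-in for listObjects (returning a page plus an IsTruncated flag); all names here are illustrative, not SDK API.

```php
<?php
// A generator-based page iterator in the spirit of getPaginator: the marker
// lives inside the generator and is updated after each page is yielded.
function pages(callable $fetchPage, int $maxKeys): Generator
{
    $marker = '';
    do {
        // $fetchPage returns ['Contents' => [...keys], 'IsTruncated' => bool]
        $result = $fetchPage($marker, $maxKeys);
        yield $result['Contents'];
        $contents = $result['Contents'];
        $marker = $contents ? end($contents) : '';
    } while ($result['IsTruncated']);
}

// In-memory stand-in for a listObjects call over sorted keys.
$keys  = ['a', 'b', 'c', 'd', 'e'];
$fetch = function (string $marker, int $maxKeys) use ($keys): array {
    $after = array_values(array_filter($keys, fn($k) => $k > $marker));
    return [
        'Contents'    => array_slice($after, 0, $maxKeys),
        'IsTruncated' => count($after) > $maxKeys,
    ];
};

$allPages = iterator_to_array(pages($fetch, 2), false);
```

Calling next() on the generator never re-walks earlier pages; the saved marker lets each fetch start exactly where the last page ended.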

Thank you for reading. That covers using the AWS S3 SDK for PHP for a cloud disk; the specifics are best confirmed by verifying these interfaces in practice.
