How to implement large file upload: instant upload, resumable (breakpoint) upload, and multipart upload

2025-04-11 Update From: SLTechnology News&Howtos > Development


Shulou (Shulou.com) 06/03 Report --

This article covers how to implement large file upload: instant upload ("second upload"), resumable (breakpoint) upload, and multipart upload. These are situations many people run into in real projects, so read carefully and follow along; I hope you get something out of it!

Instant upload ("second upload")

1. What is instant upload

Put simply: when you upload a file, the server first runs an MD5 check. If a file with the same MD5 already exists on the server, it immediately returns an address for that existing file, and the "upload" finishes in seconds; what you later download is the same file already stored on the server. If you want to defeat instant upload, change the MD5 by modifying the file content itself (renaming is not enough). For example, adding a few characters to a text file changes its MD5, and the instant-upload path no longer applies.
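To make the MD5 behavior concrete, here is a minimal sketch using only the JDK's MessageDigest (the class and method names are illustrative, not part of this article's code): the digest depends only on the file's content, so renaming changes nothing, while editing even one byte yields a completely different MD5.

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Demo {

  /** Hex MD5 of the given content; the basis of the instant-upload check. */
  public static String md5Hex(byte[] content) {
    try {
      byte[] digest = MessageDigest.getInstance("MD5").digest(content);
      // left-pad to 32 hex chars, since leading zero bytes would otherwise be dropped
      return String.format("%032x", new BigInteger(1, digest));
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException("MD5 not available", e); // MD5 ships with every JVM
    }
  }
}
```

Two contents that differ by a single byte hash to unrelated values, which is exactly why appending a few characters to a file makes the server treat it as brand new.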

2. The core instant-upload logic implemented in this article

A. Use redis's set method to store the file upload status, where the key is the file's MD5 and the value is a flag indicating whether the upload has completed.

B. If the flag is true, the upload has already completed, so uploading the same file again takes the instant-upload path. If the flag is false, the upload has not finished yet; in that case, also call set to record the path of the chunk-progress (conf) file, where the key is the file MD5 plus a fixed prefix and the value is the path of that progress file.
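The decision logic above can be sketched as follows. This is a minimal illustration that substitutes an in-memory map for the article's RedisUtil; the key layout mirrors the article (file MD5 maps to a completed flag, and a prefixed MD5 key maps to the progress-file path), but the class, method names, and path are all hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class InstantUploadCheck {
  // stand-in for FileConstant.FILE_MD5_KEY (illustrative value)
  static final String MD5_KEY_PREFIX = "FILE_MD5:";

  final Map<String, String> store = new HashMap<>(); // stand-in for redis

  /** Returns "INSTANT", "RESUME", or "NEW" for an incoming file MD5. */
  public String check(String md5) {
    String flag = store.get(md5);
    if ("true".equals(flag)) {
      return "INSTANT";            // file fully uploaded before: skip the transfer
    }
    if ("false".equals(flag)) {
      return "RESUME";             // partial upload exists: resume from the conf file
    }
    store.put(md5, "false");       // first sight of this file: start a new upload
    store.put(MD5_KEY_PREFIX + md5, "/tmp/upload/" + md5 + ".conf");
    return "NEW";
  }

  /** Called once every part has arrived and the file has been assembled. */
  public void markComplete(String md5) {
    store.put(md5, "true");
    store.remove(MD5_KEY_PREFIX + md5); // progress record is no longer needed
  }
}
```

A second upload of the same MD5 after markComplete takes the "INSTANT" branch without transferring any bytes.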

Multipart upload

1. What is multipart upload

Multipart upload splits the file to be uploaded into multiple data blocks of a fixed size (we call each one a part) and uploads them separately. After all parts have been uploaded, the server assembles them back into the original file.
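The split/merge round trip behind multipart upload can be sketched like this (an illustrative example, not the article's code; real uploads stream parts from disk, but a byte array keeps the idea visible):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkDemo {

  /** Split data into parts of at most chunkSize bytes (the last part may be shorter). */
  public static List<byte[]> split(byte[] data, int chunkSize) {
    List<byte[]> parts = new ArrayList<>();
    for (int offset = 0; offset < data.length; offset += chunkSize) {
      int end = Math.min(offset + chunkSize, data.length);
      parts.add(Arrays.copyOfRange(data, offset, end));
    }
    return parts;
  }

  /** Reassemble the parts, in order, into the original byte stream. */
  public static byte[] merge(List<byte[]> parts) {
    int total = parts.stream().mapToInt(p -> p.length).sum();
    byte[] out = new byte[total];
    int offset = 0;
    for (byte[] part : parts) {
      System.arraycopy(part, 0, out, offset, part.length);
      offset += part.length;
    }
    return out;
  }
}
```

Splitting and then merging must reproduce the original bytes exactly; that round trip is the correctness condition for any multipart scheme.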

2. Scenarios for multipart upload

1. Uploading large files

2. Poor network environments, where there is a risk of needing to retransmit.

Resumable (breakpoint) upload

1. What is resumable upload

When downloading or uploading, the task (a file or an archive) is deliberately divided into several parts, and each part is transferred by its own thread. If a network failure occurs, the transfer can continue from the parts already completed instead of starting over from scratch. This article focuses on the resumable upload scenario.

2. Application scenarios

Resumable upload can be seen as a derivative of multipart upload: anywhere multipart upload applies, resumable upload applies as well.

3. The core logic of resumable upload

If a multipart upload is interrupted by an abnormal event such as a system crash or network outage, the client needs to record the upload progress, so that when uploading resumes later it can continue from where it was interrupted.

To guard against the client's progress data being lost or deleted, the server should also provide an interface the client can query for the parts that have already been uploaded, so the client knows what has arrived and can continue from the next missing part.
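The server side of that query can be sketched from the conf-file layout this article uses: slot i of the progress bytes holds Byte.MAX_VALUE (127) if part i has been uploaded and 0 otherwise. The class and method names below are illustrative, not the article's API.

```java
import java.util.ArrayList;
import java.util.List;

public class UploadProgress {

  /** Indexes of parts already uploaded, i.e. slots equal to Byte.MAX_VALUE. */
  public static List<Integer> uploadedParts(byte[] conf) {
    List<Integer> done = new ArrayList<>();
    for (int i = 0; i < conf.length; i++) {
      if (conf[i] == Byte.MAX_VALUE) {
        done.add(i);
      }
    }
    return done;
  }

  /** ANDing every slot leaves 127 only when all parts have been uploaded. */
  public static boolean isComplete(byte[] conf) {
    byte acc = Byte.MAX_VALUE;
    for (byte b : conf) {
      acc = (byte) (acc & b);
    }
    return acc == Byte.MAX_VALUE;
  }
}
```

A client that receives, say, [0, 2] from this interface knows it only needs to resend parts 1 and 3.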

4. Implementation steps

A. Option 1: general steps

Split the file to be uploaded into equally sized blocks according to a segmentation rule.

Initialize a multipart upload task and return a unique ID for this upload.

Send each part's data according to a chosen strategy (serial or parallel).

After the transfer finishes, the server checks whether all the data has been uploaded; if so, it merges the parts into the original file.

B. Option 2: the steps used in this article

The front end (client) slices the file into fixed-size parts and sends each request to the back end (server) with the part's sequence number and size.

The server creates a conf file to record which parts have arrived. The conf file's length equals the total number of parts; for each part uploaded, Byte.MAX_VALUE (127) is written at that part's position, so a position still holding 0 has not been uploaded yet and a position holding 127 has. This is the core step behind both resumable upload and instant upload.

From the part sequence number in the request and the fixed part size (every part has the same size), the server computes the start position and writes the part's data into the target file at that offset.
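Because every part has the same fixed size, the offset arithmetic is trivial: part n starts at n * chunkSize, and that is the position the article's strategies hand to RandomAccessFile.seek or FileChannel.map. A minimal sketch (illustrative names, not the article's code):

```java
public class OffsetCalc {

  /** Start position of part chunkIndex when every part is chunkSize bytes. */
  public static long offset(long chunkSize, int chunkIndex) {
    return chunkSize * chunkIndex;
  }

  /** Total number of parts needed for a file of fileSize bytes (last part may be short). */
  public static int totalChunks(long fileSize, long chunkSize) {
    return (int) ((fileSize + chunkSize - 1) / chunkSize);
  }
}
```

Writing each part at its computed offset is what lets parts arrive in any order, or in parallel, and still produce the correct file.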

5. Multipart upload / resumable upload code

A. The front end uses Baidu's webuploader plug-in for slicing. This article focuses on the server-side implementation; for how webuploader performs the slicing, see:

http://fex.baidu.com/webuploader/getting-started.html

B. The back end implements the file write in two ways. One uses RandomAccessFile; if you are not familiar with RandomAccessFile, see:

https://blog.csdn.net/dimudan2015/article/details/81910690

The other uses MappedByteBuffer; if you are not familiar with MappedByteBuffer, see:

https://www.jianshu.com/p/f90866dcbffc

Core back-end write code

A. RandomAccessFile implementation

@UploadMode(mode = UploadModeEnum.RANDOM_ACCESS)
@Slf4j
public class RandomAccessUploadStrategy extends SliceUploadTemplate {

  @Autowired
  private FilePathUtil filePathUtil;

  @Value("${upload.chunkSize}")
  private long defaultChunkSize;

  @Override
  public boolean upload(FileUploadRequestDTO param) {
    RandomAccessFile accessTmpFile = null;
    try {
      String uploadDirPath = filePathUtil.getPath(param);
      File tmpFile = super.createTmpFile(param);
      accessTmpFile = new RandomAccessFile(tmpFile, "rw");
      // this must be consistent with the chunk size set on the front end
      long chunkSize = Objects.isNull(param.getChunkSize())
          ? defaultChunkSize * 1024 * 1024 : param.getChunkSize();
      // seek to this part's offset
      long offset = chunkSize * param.getChunk();
      accessTmpFile.seek(offset);
      // write the part data
      accessTmpFile.write(param.getFile().getBytes());
      boolean isOk = super.checkAndSetUploadProgress(param, uploadDirPath);
      return isOk;
    } catch (IOException e) {
      log.error(e.getMessage(), e);
    } finally {
      FileUtil.close(accessTmpFile);
    }
    return false;
  }
}

B. MappedByteBuffer implementation

@UploadMode(mode = UploadModeEnum.MAPPED_BYTEBUFFER)
@Slf4j
public class MappedByteBufferUploadStrategy extends SliceUploadTemplate {

  @Autowired
  private FilePathUtil filePathUtil;

  @Value("${upload.chunkSize}")
  private long defaultChunkSize;

  @Override
  public boolean upload(FileUploadRequestDTO param) {
    RandomAccessFile tempRaf = null;
    FileChannel fileChannel = null;
    MappedByteBuffer mappedByteBuffer = null;
    try {
      String uploadDirPath = filePathUtil.getPath(param);
      File tmpFile = super.createTmpFile(param);
      tempRaf = new RandomAccessFile(tmpFile, "rw");
      fileChannel = tempRaf.getChannel();
      long chunkSize = Objects.isNull(param.getChunkSize())
          ? defaultChunkSize * 1024 * 1024 : param.getChunkSize();
      // map the region for this part and write the part data
      long offset = chunkSize * param.getChunk();
      byte[] fileData = param.getFile().getBytes();
      mappedByteBuffer = fileChannel
          .map(FileChannel.MapMode.READ_WRITE, offset, fileData.length);
      mappedByteBuffer.put(fileData);
      boolean isOk = super.checkAndSetUploadProgress(param, uploadDirPath);
      return isOk;
    } catch (IOException e) {
      log.error(e.getMessage(), e);
    } finally {
      FileUtil.freedMappedByteBuffer(mappedByteBuffer);
      FileUtil.close(fileChannel);
      FileUtil.close(tempRaf);
    }
    return false;
  }
}

C. Core file-operation template class

@Slf4j
public abstract class SliceUploadTemplate implements SliceUploadStrategy {

  public abstract boolean upload(FileUploadRequestDTO param);

  protected File createTmpFile(FileUploadRequestDTO param) {
    FilePathUtil filePathUtil = SpringContextHolder.getBean(FilePathUtil.class);
    param.setPath(FileUtil.withoutHeadAndTailDiagonal(param.getPath()));
    String fileName = param.getFile().getOriginalFilename();
    String uploadDirPath = filePathUtil.getPath(param);
    String tempFileName = fileName + "_tmp";
    File tmpDir = new File(uploadDirPath);
    File tmpFile = new File(uploadDirPath, tempFileName);
    if (!tmpDir.exists()) {
      tmpDir.mkdirs();
    }
    return tmpFile;
  }

  @Override
  public FileUploadDTO sliceUpload(FileUploadRequestDTO param) {
    boolean isOk = this.upload(param);
    if (isOk) {
      File tmpFile = this.createTmpFile(param);
      FileUploadDTO fileUploadDTO =
          this.saveAndFileUploadDTO(param.getFile().getOriginalFilename(), tmpFile);
      return fileUploadDTO;
    }
    String md5 = FileMD5Util.getFileMD5(param.getFile());
    Map<Integer, String> map = new HashMap<>();
    map.put(param.getChunk(), md5);
    return FileUploadDTO.builder().chunkMd5Info(map).build();
  }

  /**
   * Check and update the file upload progress.
   */
  public boolean checkAndSetUploadProgress(FileUploadRequestDTO param, String uploadDirPath) {
    String fileName = param.getFile().getOriginalFilename();
    File confFile = new File(uploadDirPath, fileName + ".conf");
    byte isComplete = 0;
    RandomAccessFile accessConfFile = null;
    try {
      accessConfFile = new RandomAccessFile(confFile, "rw");
      // mark this part as uploaded
      System.out.println("set part " + param.getChunk() + " complete");
      // the conf file's length equals the total number of parts; each uploaded
      // part writes Byte.MAX_VALUE (127) at its index, so a slot still holding
      // the default 0 has not been uploaded yet
      accessConfFile.setLength(param.getChunks());
      accessConfFile.seek(param.getChunk());
      accessConfFile.write(Byte.MAX_VALUE);
      // AND all the flags together: the result stays Byte.MAX_VALUE only if
      // every part has been uploaded
      byte[] completeList = FileUtils.readFileToByteArray(confFile);
      isComplete = Byte.MAX_VALUE;
      for (int i = 0; i < completeList.length && isComplete == Byte.MAX_VALUE; i++) {
        isComplete = (byte) (isComplete & completeList[i]);
        System.out.println("check part " + i + " complete?:" + completeList[i]);
      }
    } catch (IOException e) {
      log.error(e.getMessage(), e);
    } finally {
      FileUtil.close(accessConfFile);
    }
    boolean isOk = setUploadProgress2Redis(param, uploadDirPath, fileName, confFile, isComplete);
    return isOk;
  }

  /**
   * Store upload progress information in redis.
   */
  private boolean setUploadProgress2Redis(FileUploadRequestDTO param, String uploadDirPath,
      String fileName, File confFile, byte isComplete) {
    RedisUtil redisUtil = SpringContextHolder.getBean(RedisUtil.class);
    if (isComplete == Byte.MAX_VALUE) {
      redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "true");
      redisUtil.del(FileConstant.FILE_MD5_KEY + param.getMd5());
      confFile.delete();
      return true;
    } else {
      if (!redisUtil.hHasKey(FileConstant.FILE_UPLOAD_STATUS, param.getMd5())) {
        redisUtil.hset(FileConstant.FILE_UPLOAD_STATUS, param.getMd5(), "false");
        redisUtil.set(FileConstant.FILE_MD5_KEY + param.getMd5(),
            uploadDirPath + FileConstant.FILE_SEPARATORCHAR + fileName + ".conf");
      }
      return false;
    }
  }

  /**
   * Save the finished file.
   */
  public FileUploadDTO saveAndFileUploadDTO(String fileName, File tmpFile) {
    FileUploadDTO fileUploadDTO = null;
    try {
      fileUploadDTO = renameFile(tmpFile, fileName);
      if (fileUploadDTO.isUploadComplete()) {
        System.out.println("upload complete!"
            + fileUploadDTO.isUploadComplete() + " name=" + fileName);
        // TODO save file information to the database
      }
    } catch (Exception e) {
      log.error(e.getMessage(), e);
    }
    return fileUploadDTO;
  }

  /**
   * Rename a file.
   *
   * @param toBeRenamed   the file to be renamed
   * @param toFileNewName the new name
   */
  private FileUploadDTO renameFile(File toBeRenamed, String toFileNewName) {
    // check that the file to be renamed exists and is a regular file
    FileUploadDTO fileUploadDTO = new FileUploadDTO();
    if (!toBeRenamed.exists() || toBeRenamed.isDirectory()) {
      log.info("File does not exist: {}", toBeRenamed.getName());
      fileUploadDTO.setUploadComplete(false);
      return fileUploadDTO;
    }
    String ext = FileUtil.getExtension(toFileNewName);
    String p = toBeRenamed.getParent();
    String filePath = p + FileConstant.FILE_SEPARATORCHAR + toFileNewName;
    File newFile = new File(filePath);
    // rename the file
    boolean uploadFlag = toBeRenamed.renameTo(newFile);
    fileUploadDTO.setMtime(DateUtil.getCurrentTimeStamp());
    fileUploadDTO.setUploadComplete(uploadFlag);
    fileUploadDTO.setPath(filePath);
    fileUploadDTO.setSize(newFile.length());
    fileUploadDTO.setFileExt(ext);
    fileUploadDTO.setFileId(toFileNewName);
    return fileUploadDTO;
  }
}

That concludes "how to implement large file upload: instant upload, resumable upload, and multipart upload". Thank you for reading. If you want to learn more, follow this site; the editor will keep producing practical articles for you!
