This article mainly explains how to upload files with Vue. Friends who are interested may wish to take a look; the method introduced here is simple, fast, and practical. Now let's learn how to upload files with Vue!
Why use Vue-Simple-Uploader
Recently I used Vue + Spring Boot to implement file upload, ran into a few pitfalls, compared several Vue components, and found a very handy one: vue-simple-uploader.
Let me explain why this component was chosen. Compared with the upload components of ant-design-vue and element-ui, it can do quite a bit more, for example:
Uploads can be paused and resumed
Upload queue management, with support for a maximum number of concurrent uploads
Chunked (multipart) upload
Progress display, remaining-time estimation, automatic retry on error, re-upload, and similar operations
"Instant upload": the file's identifier is sent to the server to check whether the file already exists, so an already-uploaded file can be skipped
Since the requirement calls for resumable (breakpoint) upload, this component was chosen. Let's start with the most basic uploads:
Single file upload, multiple file upload, folder upload
Vue Code:
(Template code: an uploader bound to the options below, with two uploader-btn elements labelled "Select File" and "Select folder".)
This component supports multiple file uploads by default. The template is taken from the official demo, and the upload path is configured in uploadOptions1. Setting the directory attribute on uploader-btn lets you select a whole folder to upload; a sketch of the template is shown below.
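Only the button labels survive above, so here is a minimal sketch of what the plugin registration and template might look like, based on the official vue-simple-uploader demo; the uploadOptions1 binding and the class name are assumptions taken from the surrounding text:

// main.js - register the plugin (Vue 2 style, assumed)
import Vue from "vue"
import uploader from "vue-simple-uploader"
Vue.use(uploader)

<!-- component template (sketch) -->
<uploader :options="uploadOptions1" class="uploader-example">
  <uploader-unsupport></uploader-unsupport>
  <!-- single / multiple file selection -->
  <uploader-btn>Select File</uploader-btn>
  <!-- folder selection via the directory attribute -->
  <uploader-btn :directory="true">Select folder</uploader-btn>
  <uploader-list></uploader-list>
</uploader>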
uploadOptions1: {
  // upload API
  target: "//localhost:18080/api/upload/single",
  // whether to enable server-side chunk verification
  testChunks: false,
  // default file parameter name
  fileParameterName: "file",
  headers: {},
  query() {},
  // restrict the types that can be uploaded
  categaryMap: {
    image: ["gif", "jpg", "jpeg", "png", "bmp"]
  }
}
For the backend interface, we define a Chunk class for convenience. It receives the parameters that the component sends by default, and these parameters are exactly what we need later for chunked resumable upload:
Chunk class
@Data
public class Chunk implements Serializable {

    private static final long serialVersionUID = 7073871700302406420L;

    private Long id;
    /** current chunk number, starting from 1 */
    private Integer chunkNumber;
    /** chunk size */
    private Long chunkSize;
    /** size of the current chunk */
    private Long currentChunkSize;
    /** total file size */
    private Long totalSize;
    /** file identifier */
    private String identifier;
    /** file name */
    private String filename;
    /** relative path */
    private String relativePath;
    /** total number of chunks */
    private Integer totalChunks;
    /** file type */
    private String type;
    /** the uploaded file */
    private MultipartFile file;
}
When writing the interface we can use this class directly as the method parameter to receive what vue-simple-uploader sends. Note that the upload request must be received with POST.
API method:
@PostMapping("single")
public void singleUpload(Chunk chunk) {
    // get the uploaded file
    MultipartFile file = chunk.getFile();
    // get the file name
    String filename = chunk.getFilename();
    try {
        // get the file content
        byte[] bytes = file.getBytes();
        // SINGLE_FOLDER is a path constant I defined; create the directory if it does not exist
        if (!Files.isWritable(Paths.get(SINGLE_FOLDER))) {
            Files.createDirectories(Paths.get(SINGLE_FOLDER));
        }
        // build the path of the uploaded file
        Path path = Paths.get(SINGLE_FOLDER, filename);
        // write the bytes to the file
        Files.write(path, bytes);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Note that if the file is too large, the Spring Boot backend will report an error:
org.apache.tomcat.util.http.fileupload.FileUploadBase$FileSizeLimitExceededException: The field file exceeds its maximum permitted size of 1048576 bytes.
In that case you need to configure the maximum upload sizes for the servlet in application.yml (the defaults are 1MB per file and 10MB per request):
spring:
  servlet:
    multipart:
      max-file-size: 10MB
      max-request-size: 100MB
Next, start the project, select a file to upload, and you can see the result. So far, though, the other components can do basically the same thing. The real reason for choosing this one is its support for chunked resumable upload: if the upload is interrupted by a network failure, reconnecting lets you continue from the breakpoint instead of starting over. Next, let's look at how resumable chunked upload works.
Chunked resumable upload
First, the general idea of chunked resumable upload. In the component we can configure a chunk size; a file larger than this value is split into several chunks for upload, and the chunkNumber of each uploaded chunk is saved to a data store (MySQL or Redis; here I chose Redis).
The component sends an identifier parameter with each upload (here I use the default value; you can also generate an MD5 of the file and assign it to this parameter). The identifier is used as the Redis key, the hashKey is "chunkNumberList", and the value is a Set of the chunkNumbers that have been uploaded so far.
After setting testChunks to true in the upload options, the component first sends a GET request to fetch the set of uploaded chunkNumbers, decides in checkChunkUploadedByResponse whether a given chunk already exists (and skips it if so), and then sends POST requests to upload the remaining chunks.
Each time a chunk is uploaded, the service layer returns the current size of the set and compares it with totalChunks from the parameters. When they are equal, a status code is returned that tells the frontend to issue a merge request, which combines the chunks just uploaded into one file; with that, the chunked resumable upload is complete.
Here is the corresponding code ~
Vue Code:
(Template code: an uploader bound to uploadOptions2 with an uploader-btn labelled "Multipart upload"; a sketch follows.)
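Again only the button label survives, so here is a minimal sketch of the template; the uploadOptions2 binding and the onFileSuccess2 handler name are taken from the code below, while the surrounding markup is an assumption based on the official demo:

<uploader :options="uploadOptions2" @file-success="onFileSuccess2" class="uploader-example">
  <uploader-unsupport></uploader-unsupport>
  <!-- chunked, resumable upload -->
  <uploader-btn>Multipart upload</uploader-btn>
  <uploader-list></uploader-list>
</uploader>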
The options code that checks which chunks have already been uploaded:
uploadOptions2: {
  target: "//localhost:18080/api/upload/chunk",
  chunkSize: 1 * 1024 * 1024,
  testChunks: true,
  checkChunkUploadedByResponse: function (chunk, message) {
    let objMessage = JSON.parse(message);
    // get the collection of already uploaded chunk numbers
    let chunkNumbers = objMessage.chunkNumbers;
    // if the current chunk is in the collection, skip uploading it
    return (chunkNumbers || []).indexOf(chunk.offset + 1) >= 0;
  },
  headers: {},
  query() {},
  categaryMap: {
    image: ["gif", "jpg", "jpeg", "png", "bmp"],
    zip: ["zip"],
    document: ["csv"]
  }
}
Handling a successful upload: check the returned status and, if needed, send the merge request
onFileSuccess2(rootFile, file, response, chunk) {
  let res = JSON.parse(response);
  // if (res.code == 1) { return; }
  // a merge is needed
  if (res.code == 205) {
    // send the merge request with identifier and filename; note that the parameter names
    // must match the fields of the backend Chunk class, otherwise they will not be received
    const formData = new FormData();
    formData.append("identifier", file.uniqueIdentifier);
    formData.append("filename", file.name);
    merge(formData).then(response => {});
  }
}
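The merge(formData) call above refers to a request helper that is not shown in the article. Below is a minimal sketch using axios, assuming the helper lives in a hypothetical api/upload.js and that the backend exposes the merge endpoint at //localhost:18080/api/upload/merge (matching the @PostMapping("merge") handler shown later):

// api/upload.js - hypothetical helper imported by the component
import axios from "axios";

export function merge(formData) {
  // POST identifier and filename so the backend can merge the saved chunks
  return axios.post("//localhost:18080/api/upload/merge", formData);
}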
Backend code that checks which chunks already exist. Note that this is a GET request!
@GetMapping("chunk")
public Map checkChunks(Chunk chunk) {
    return uploadService.checkChunkExits(chunk);
}

@Override
public Map checkChunkExits(Chunk chunk) {
    Map res = new HashMap();
    String identifier = chunk.getIdentifier();
    if (redisDao.existsKey(identifier)) {
        Set chunkNumbers = (Set) redisDao.hmGet(identifier, "chunkNumberList");
        res.put("chunkNumbers", chunkNumbers);
    }
    return res;
}
Saving a chunk and recording its chunkNumber in Redis. This is a POST request!
@PostMapping("chunk")
public Map saveChunk(Chunk chunk) {
    // saving a chunk is basically the same as saving a single file
    MultipartFile file = chunk.getFile();
    Integer chunkNumber = chunk.getChunkNumber();
    String identifier = chunk.getIdentifier();
    byte[] bytes;
    try {
        bytes = file.getBytes();
        // the difference is that a chunk is saved with "-chunkNumber" appended to its file name
        Path path = Paths.get(generatePath(CHUNK_FOLDER, chunk));
        Files.write(path, bytes);
    } catch (IOException e) {
        e.printStackTrace();
    }
    // record the chunk in Redis and get back the size of the set
    Integer chunks = uploadService.saveChunk(chunkNumber, identifier);
    Map result = new HashMap();
    // if the set size equals totalChunks, all chunks are uploaded; tell the frontend to merge
    if (chunks.equals(chunk.getTotalChunks())) {
        result.put("message", "upload successful!");
        result.put("code", 205);
    }
    return result;
}

/**
 * generate the file path of a chunk
 */
private static String generatePath(String uploadFolder, Chunk chunk) {
    StringBuilder sb = new StringBuilder();
    // build the upload path: uploadFolder/identifier
    sb.append(uploadFolder).append(File.separator).append(chunk.getIdentifier());
    // create uploadFolder/identifier if it does not exist
    if (!Files.isWritable(Paths.get(sb.toString()))) {
        try {
            Files.createDirectories(Paths.get(sb.toString()));
        } catch (IOException e) {
            log.error(e.getMessage(), e);
        }
    }
    // the chunk file name ends with "-chunkNumber" so the chunks can be sorted when merging
    return sb.append(File.separator)
             .append(chunk.getFilename())
             .append("-")
             .append(chunk.getChunkNumber())
             .toString();
}

/**
 * Save the chunk information to Redis
 */
public Integer saveChunk(Integer chunkNumber, String identifier) {
    // get the current chunk list
    Set oldChunkNumber = (Set) redisDao.hmGet(identifier, "chunkNumberList");
    // if nothing is stored yet, create a new Set, add the current chunkNumber and save it to Redis
    if (Objects.isNull(oldChunkNumber)) {
        Set newChunkNumber = new HashSet();
        newChunkNumber.add(chunkNumber);
        redisDao.hmSet(identifier, "chunkNumberList", newChunkNumber);
        // return the size of the set
        return newChunkNumber.size();
    } else {
        // otherwise add the current chunkNumber to the existing set and store it back in Redis
        oldChunkNumber.add(chunkNumber);
        redisDao.hmSet(identifier, "chunkNumberList", oldChunkNumber);
        // return the size of the set
        return oldChunkNumber.size();
    }
}
The backend code for merging the chunks:
@PostMapping("merge")
public void mergeChunks(Chunk chunk) {
    String fileName = chunk.getFilename();
    uploadService.mergeFile(fileName, CHUNK_FOLDER + File.separator + chunk.getIdentifier());
}

@Override
public void mergeFile(String fileName, String chunkFolder) {
    try {
        // mergeFolder is a path constant for the merge output directory (defined elsewhere);
        // create it if it does not exist
        if (!Files.isWritable(Paths.get(mergeFolder))) {
            Files.createDirectories(Paths.get(mergeFolder));
        }
        // full path of the merged file
        String target = mergeFolder + File.separator + fileName;
        // create the target file
        Files.createFile(Paths.get(target));
        // list the chunk folder, filter and sort the chunks, then append them to the merged file
        Files.list(Paths.get(chunkFolder))
                // only keep files whose name contains "-"
                .filter(path -> path.getFileName().toString().contains("-"))
                // sort by chunkNumber, from small to large
                .sorted((o1, o2) -> {
                    String p1 = o1.getFileName().toString();
                    String p2 = o2.getFileName().toString();
                    int i1 = p1.lastIndexOf("-");
                    int i2 = p2.lastIndexOf("-");
                    return Integer.valueOf(p1.substring(i1 + 1))
                            .compareTo(Integer.valueOf(p2.substring(i2 + 1)));
                })
                .forEach(path -> {
                    try {
                        // append the chunk to the merged file
                        Files.write(Paths.get(target), Files.readAllBytes(path), StandardOpenOption.APPEND);
                        // delete the chunk once it has been merged
                        Files.delete(path);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
    } catch (IOException e) {
        e.printStackTrace();
    }
}
With that, our chunked resumable upload is complete. I have uploaded the full code to GitHub; star / fork / PR are welcome (I will also upload the blog post there later).
Front end: https://github.com/viyog/viboot-front
Background: https://github.com/viyog/viboot
At this point, I believe you have a deeper understanding of how to upload files with Vue. Why not try it out in practice yourself?