How to upload large files with Vue+NodeJS


In this article I will walk you through in detail how to implement large file upload with Vue + NodeJS. The content is detailed, the steps are clear, and the details are handled carefully; I hope this article helps resolve your doubts. Follow along and let's learn something new together.

A common way to upload a file is to create a new FormData, append the file to it, and post it to the backend. For large files, however, this approach easily runs into upload timeouts, and a failure means starting over from scratch. The user also cannot refresh the browser during the long wait, or all progress is lost. Problems like this are therefore usually solved by uploading in slices.
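For reference, the naive approach looks roughly like this (a minimal sketch; the /uploadfile endpoint and the this.$http axios instance are assumptions used for illustration, matching the conventions later in this article):

// naive single-request upload: fine for small files, fragile for large ones
async uploadWhole(file) {
  const form = new FormData()
  form.append('file', file)
  // one big POST: a timeout or network error anywhere means starting over from zero
  return this.$http.post('/uploadfile', form)
}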

Overall approach

Cut the file into multiple small slices.

Calculate a hash that uniquely identifies the file, so that on the next upload only the missing slices need to be sent.

After all slices are uploaded, notify the server to merge them.

The server notifies the front end of the file path once the upload succeeds.

If the process fails midway, the file hash computed earlier lets the next upload filter out the slices that already went through (resumable upload). If the whole file has already been uploaded, nothing needs to be transferred at all (instant upload).
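Putting these steps together, the client-side flow looks roughly like this (a high-level sketch; checkFile is a hypothetical wrapper around the /checkfile request that is written out in full later in this article):

// high-level client flow (sketch)
async upload() {
  const chunks = this.createFileChunk(this.file)             // 1. slice the file
  this.hash = await this.calculateHash2(chunks)              // 2. compute the file hash
  const { uploaded, uploadedList } = await this.checkFile()  // 3. ask the server what it already has
  if (uploaded) return                                       // instant upload: nothing to send
  await this.uploadChunks(uploadedList)                      // 4. send only the missing slices
  await this.mergeFile()                                     // 5. ask the server to merge the slices
}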

Project demonstration

Here, Vue and Node are used to build the front end and the back end, respectively.

Front-end interface

FileUpload.vue

(The page provides a file input plus two indicators: the hash calculation progress and the upload progress.)
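The original template markup is not reproduced here; a minimal sketch of what it could look like (the element layout and bindings are assumptions, but the handler and state names match the script below):

<template>
  <div>
    <input type="file" @change="handleFileUpload" />
    <!-- progress of the hash calculation -->
    <p>hash progress: {{ hashProgress }}%</p>
    <!-- overall upload progress (computed property defined later) -->
    <p>upload progress: {{ uploadedProgress }}%</p>
  </div>
</template>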

File slicing

The file can be sliced using the File.prototype.slice method (inherited from Blob). fileUpload.vue:

import sparkMD5 from 'spark-md5'

const CHUNK_SIZE = 1024 * 1024 // each slice is 1MB

export default {
  name: 'file-upload',
  data() {
    return {
      file: null,      // the file to upload
      chunks: [],      // the slices
      hashProgress: 0, // hash calculation progress
      hash: ''
    }
  },
  methods: {
    async handleFileUpload(e) {
      const [file] = e.target.files
      if (!file) {
        return
      }
      this.file = file
      this.upload()
    },
    // file upload
    async upload() {
      // slice
      const chunks = this.createFileChunk(this.file)
      // ...
      // hash calculation
      const hash = await this.calculateHash2(chunks)
    },
    // file slicing
    createFileChunk(file, size = CHUNK_SIZE) {
      const chunks = []
      let cur = 0
      const maxLen = Math.ceil(file.size / size)
      while (cur < maxLen) {
        const start = cur * size
        const end = (start + size >= file.size) ? file.size : start + size
        chunks.push({ index: cur, file: file.slice(start, end) })
        cur++
      }
      return chunks
    }
  }
}

Hash calculation

Using MD5, you can compute a unique hash value for the file.

Here the spark-md5 library is used to compute the file's hash incrementally.

calculateHash2(chunks) {
  const spark = new sparkMD5.ArrayBuffer()
  let count = 0
  const len = chunks.length
  let hash
  const self = this
  const startTime = new Date().getTime()
  return new Promise((resolve) => {
    const loadNext = index => {
      const reader = new FileReader()
      // read the file slice
      reader.readAsArrayBuffer(chunks[index].file)
      reader.onload = function (e) {
        const endTime = new Date().getTime()
        chunks[count] = { ...chunks[count], time: endTime - startTime }
        count++
        // after a successful read, feed spark for incremental calculation
        spark.append(e.target.result)
        if (count == len) {
          self.hashProgress = 100
          // return the hash of the entire file
          hash = spark.end()
          resolve(hash)
        } else {
          // update the hash calculation progress
          self.hashProgress += 100 / len
          loadNext(index + 1)
        }
      }
    }
    loadNext(0)
  })
}

You can see that the whole process is quite time-consuming and may block the UI (the page freezes), so we can optimize it with techniques such as a Web Worker, which we will discuss at the end.

Query slice status

Once we know the file's hash, we query the backend for the file's upload status before uploading any slices. If the file has already been fully uploaded, there is no need to upload it again; if only part of it has been uploaded, we upload just the slices that are missing (resumable upload).

Front-end fileUpload.vue

// ...
methods: {
  // ...
  async upload() {
    // ... slicing and hash calculation as above
    this.hash = hash
    // query the server, passing the hash and the file extension as parameters
    this.$http.post('/checkfile', {
      hash,
      ext: this.file.name.split('.').pop()
    }).then(res => {
      // the API returns two values: uploaded (Boolean, whether the whole file
      // is already uploaded) and uploadedList (which slices have been uploaded)
      const { uploaded, uploadedList } = res.data
      // if already uploaded, just prompt the user (instant upload)
      if (uploaded) {
        return this.$message.success('instant upload successful')
      }
      // convention: each uploaded slice is named hash + '-' + index
      this.chunks = chunks.map((chunk, index) => {
        const name = hash + '-' + index
        // whether this slice has already been uploaded
        const isChunkUploaded = uploadedList.includes(name)
        return {
          hash,
          name,
          index,
          chunk: chunk.file,
          // this slice's upload progress: 100 if already uploaded, otherwise 0;
          // used later to compute the overall progress
          progress: isChunkUploaded ? 100 : 0
        }
      })
      // upload the slices
      this.uploadChunks(uploadedList)
    })
  }
}

(Figure: the slice objects stored in this.chunks.)

Server side: server/index.js

const Koa = require('koa')
const Router = require('koa-router')
const koaBody = require('koa-body')
const path = require('path')
const fse = require('fs-extra')

const app = new Koa()
const router = new Router()
// files are stored under public
const UPLOAD_DIR = path.resolve(__dirname, 'public')

app.use(koaBody({
  multipart: true // support file upload
}))

router.post('/checkfile', async (ctx) => {
  const body = ctx.request.body
  const { ext, hash } = body
  // final path of the merged file: hash.ext
  const filePath = path.resolve(UPLOAD_DIR, `${hash}.${ext}`)
  let uploaded = false
  let uploadedList = []
  // check whether the whole file has already been uploaded
  if (fse.existsSync(filePath)) {
    uploaded = true
  } else {
    // all uploaded slices live in a folder named after the file's hash
    uploadedList = await getUploadedList(path.resolve(UPLOAD_DIR, hash))
  }
  ctx.body = {
    code: 0,
    data: { uploaded, uploadedList }
  }
})

async function getUploadedList(dirPath) {
  // read and return all non-hidden files in the folder
  return fse.existsSync(dirPath)
    ? (await fse.readdir(dirPath)).filter(name => name[0] !== '.')
    : []
}

// register the routes and start listening (the port is an assumption; not in the original snippet)
app.use(router.routes())
app.listen(3000)

Slice upload (resume from breakpoint)

Knowing each slice's upload status, we can select the slices that still need uploading and upload them.

Front-end fileUpload.vue

uploadChunks(uploadedList) {
  // each slice to be uploaded becomes a request
  const requests = this.chunks
    .filter(chunk => !uploadedList.includes(chunk.name))
    .map((chunk, index) => {
      const form = new FormData()
      // all uploaded slices are stored in a folder named after the file's hash,
      // so we need both hash and name
      form.append('chunk', chunk.chunk)
      form.append('hash', chunk.hash)
      form.append('name', chunk.name)
      // slices are not necessarily contiguous, so take index from the chunk object
      return { form, index: chunk.index, error: 0 }
    })
    // turn every slice into a request and upload them concurrently
    .map(({ form, index }) => {
      return this.$http.post('/uploadfile', form, {
        onUploadProgress: progress => {
          // upload progress of this slice
          this.chunks[index].progress = Number(((progress.loaded / progress.total) * 100).toFixed(2))
        }
      })
    })
  // after all requests succeed, ask the server to merge the file
  Promise.all(requests).then((res) => {
    this.mergeFile()
  })
}

Server side

router.post('/uploadfile', async (ctx) => {
  const body = ctx.request.body
  const file = ctx.request.files.chunk
  const { hash, name } = body
  // folder where the slices are stored
  const chunkPath = path.resolve(UPLOAD_DIR, hash)
  if (!fse.existsSync(chunkPath)) {
    await fse.mkdir(chunkPath)
  }
  // move the file from its temporary path into the folder
  await fse.move(file.filepath, `${chunkPath}/${name}`)
  ctx.body = {
    code: 0,
    message: 'slice uploaded successfully'
  }
})

Location where slices are saved after upload

Overall upload progress of the file

The overall upload progress depends on each slice's upload progress and the total file size; it can be implemented as a computed property.

FileUpload.vue

uploadedProgress() {
  if (!this.file || !this.chunks.length) {
    return 0
  }
  // accumulate the uploaded portion of each slice
  const loaded = this.chunks
    .map(chunk => {
      const size = chunk.chunk.size
      const chunk_loaded = chunk.progress / 100 * size
      return chunk_loaded
    })
    .reduce((acc, cur) => acc + cur, 0)
  return parseInt((loaded * 100 / this.file.size).toFixed(2))
}

Merge the files

Front-end fileUpload.vue

// pass the file extension, the slice size and the hash to the server
mergeFile() {
  this.$http.post('/mergefile', {
    ext: this.file.name.split('.').pop(),
    size: CHUNK_SIZE,
    hash: this.hash
  }).then(res => {
    if (res && res.data) {
      console.log(res.data)
    }
  })
}

Server side

router.post('/mergefile', async (ctx) => {
  const body = ctx.request.body
  const { ext, size, hash } = body
  // final path of the file
  const filePath = path.resolve(UPLOAD_DIR, `${hash}.${ext}`)
  await mergeFile(filePath, size, hash)
  ctx.body = {
    code: 0,
    data: {
      url: `/public/${hash}.${ext}`
    }
  }
})

async function mergeFile(filePath, size, hash) {
  // folder that holds the slices
  const chunkDir = path.resolve(UPLOAD_DIR, hash)
  // read the slices
  let chunks = await fse.readdir(chunkDir)
  // slices must be merged in order, so sort them by index
  chunks = chunks.sort((a, b) => a.split('-')[1] - b.split('-')[1])
  // absolute path of each slice
  chunks = chunks.map(cpath => path.resolve(chunkDir, cpath))
  await mergeChunks(chunks, filePath, size)
}

// read the slices and write them to the final path of the file
function mergeChunks(files, dest, CHUNK_SIZE) {
  const pipeStream = (filePath, writeStream) => {
    return new Promise((resolve, reject) => {
      const readStream = fse.createReadStream(filePath)
      readStream.on('end', () => {
        // delete each slice after it has been read
        fse.unlinkSync(filePath)
        resolve()
      })
      readStream.pipe(writeStream)
    })
  }
  const pipes = files.map((file, index) => {
    // write each slice at its own offset in the destination file
    return pipeStream(file, fse.createWriteStream(dest, {
      start: index * CHUNK_SIZE,
      end: (index + 1) * CHUNK_SIZE
    }))
  })
  return Promise.all(pipes)
}

The sliced upload of large files is now implemented, so let's look at the effect (which, along the way, also shows the upload progress of each individual slice).

You can see that the large number of slice requests uploaded concurrently still causes stutters, even though the browser itself limits the number of concurrent requests (many requests sit in pending status), so this process needs further optimization.

Optimization: controlling the number of concurrent requests

FileUpload.vue

Upload piece by piece

This is the most direct approach, and it can be seen as the opposite extreme of fully concurrent requests: upload one slice, and only after it succeeds upload the next. We also handle error retries here: if a slice fails three times in a row, the whole upload process is aborted.

uploadChunks(uploadedList) {
  const requests = this.chunks
    .filter(chunk => !uploadedList.includes(chunk.name))
    .map((chunk, index) => {
      const form = new FormData()
      form.append('chunk', chunk.chunk)
      form.append('hash', chunk.hash)
      form.append('name', chunk.name)
      return { form, index: chunk.index, error: 0 }
    })
  // (the fully concurrent .map + Promise.all version from above is removed here)
  const sendRequest = () => {
    return new Promise((resolve, reject) => {
      const upLoadReq = (i) => {
        const req = requests[i]
        const { form, index } = req
        this.$http.post('/uploadfile', form, {
          onUploadProgress: progress => {
            this.chunks[index].progress = Number(((progress.loaded / progress.total) * 100).toFixed(2))
          }
        }).then(res => {
          // the last slice succeeded: the whole process is done
          if (i == requests.length - 1) {
            resolve()
            return
          }
          upLoadReq(i + 1)
        }).catch(err => {
          // mark the failed slice and retry it, at most three times
          this.chunks[index].progress = -1
          if (req.error < 3) {
            req.error++
            upLoadReq(i)
          } else {
            reject()
          }
        })
      }
      upLoadReq(0)
    })
  }
  sendRequest().then(() => {
    this.mergeFile()
  })
}

You can see that there is only one upload request at a time.

The resulting file

Concurrent requests with a limit

Requesting one slice at a time does solve the stutter, but it is rather inefficient; we can build on it to allow a limited number of concurrent requests.

The usual idea for this kind of problem is a task queue. At the start, take the specified number of requests (say, three) from requests to fill the queue and start each of them. Whenever a task finishes, it leaves the queue and the next element of requests is pulled in and executed, until requests is drained. If a request fails, it is pushed back to the head of requests, so the next execution starts with that request, which gives us retries.

async uploadChunks(uploadedList) {
  const requests = this.chunks
    .filter(chunk => !uploadedList.includes(chunk.name))
    .map((chunk, index) => {
      const form = new FormData()
      form.append('chunk', chunk.chunk)
      form.append('hash', chunk.hash)
      form.append('name', chunk.name)
      return { form, index: chunk.index, error: 0 }
    })
  const sendRequest = (limit = 1) => {
    // count records the number of successful requests; when it equals len - 1,
    // all slices have been uploaded successfully
    let count = 0
    // isStop marks the error state: if one slice fails more than 3 times,
    // the whole task fails and the other concurrent chains stop recursing
    let isStop = false
    const len = requests.length
    return new Promise((resolve, reject) => {
      const upLoadReq = () => {
        if (isStop) {
          return
        }
        const req = requests.shift()
        if (!req) {
          return
        }
        const { form, index } = req
        this.$http.post('/uploadfile', form, {
          onUploadProgress: progress => {
            this.chunks[index].progress = Number(((progress.loaded / progress.total) * 100).toFixed(2))
          }
        }).then(res => {
          // the last slice
          if (count == len - 1) {
            resolve()
          } else {
            count++
            upLoadReq()
          }
        }).catch(err => {
          this.chunks[index].progress = -1
          if (req.error < 3) {
            req.error++
            // push the failed request back to the head of the queue so it is retried first
            requests.unshift(req)
            upLoadReq()
          } else {
            isStop = true
            reject()
          }
        })
      }
      while (limit > 0) {
        // simulate a queue: start `limit` chains, each recursively triggering the next task
        upLoadReq()
        limit--
      }
    })
  }
  sendRequest(3).then(res => {
    console.log(res)
    this.mergeFile()
  })
}

Optimize the hash value calculation

Besides controlling request concurrency, the hash calculation also deserves attention. Although we already compute it incrementally, it is still time-consuming and may block the UI.

Web Worker

This effectively opens an extra thread: the hash is computed in a new thread, and the result is then posted back to the main thread.

calculateHashWork(chunks) {
  return new Promise((resolve) => {
    // hash.js must live outside the project bundle so the worker can load it as a standalone script
    this.worker = new Worker('/hash.js')
    // pass the slices to the worker
    this.worker.postMessage({ chunks })
    this.worker.onmessage = e => {
      // progress and hash value posted back from the worker thread
      const { progress, hash } = e.data
      this.hashProgress = Number(progress.toFixed(2))
      if (hash) {
        resolve(hash)
      }
    }
  })
}

hash.js

// standalone script, independent of the project bundle
// import spark-md5
self.importScripts('spark-md5.min.js')
self.onmessage = e => {
  // receive the data from the main thread and start calculating
  const { chunks } = e.data
  const spark = new self.SparkMD5.ArrayBuffer()
  let progress = 0
  let count = 0
  const loadNext = index => {
    const reader = new FileReader()
    reader.readAsArrayBuffer(chunks[index].file)
    reader.onload = e => {
      count++
      spark.append(e.target.result)
      if (count == chunks.length) {
        // send the final progress and the hash back to the main thread
        self.postMessage({ progress: 100, hash: spark.end() })
      } else {
        progress += 100 / chunks.length
        // report progress to the main thread
        self.postMessage({ progress })
        loadNext(count)
      }
    }
  }
  loadNext(0)
}

Time slicing

Another approach borrows from React's Fiber architecture: use requestIdleCallback to compute the hash while the browser is idle, so rendering is not blocked and there is no obvious stutter.

calculateHashIdle(chunks) {
  return new Promise(resolve => {
    const spark = new sparkMD5.ArrayBuffer()
    let count = 0
    const appendToSpark = async file => {
      return new Promise(resolve => {
        const reader = new FileReader()
        reader.readAsArrayBuffer(file)
        reader.onload = e => {
          spark.append(e.target.result)
          resolve()
        }
      })
    }
    const workLoop = async deadline => {
      // keep working while slices remain and the browser still has idle time
      while (count < chunks.length && deadline.timeRemaining() > 1) {
        await appendToSpark(chunks[count].file)
        count++
        if (count < chunks.length) {
          // update the progress (reconstructed; the original snippet is truncated here)
          this.hashProgress = Number(((100 * count) / chunks.length).toFixed(2))
        } else {
          this.hashProgress = 100
          resolve(spark.end())
        }
      }
      // schedule the next round of work for the browser's next idle period
      if (count < chunks.length) {
        window.requestIdleCallback(workLoop)
      }
    }
    window.requestIdleCallback(workLoop)
  })
}
