This article introduces how to upload large files and resume interrupted uploads (breakpoint resume) with Vue + Node. These situations come up often in real projects, so work through the examples carefully and you should come away with something practical.
source code
Breakpoint resume, chunked (multipart) upload, instant upload, and a retry mechanism.
File upload is a classic pain point in development, and large-file upload with breakpoint resume is where the details and core techniques concentrate.
The upload component of the element-ui framework uploads a file stream by default.
Data format: form-data
Data passed: file (the file stream) and filename (the file name)
Alternatively, the file can be converted to a base64 string with fileReader.readAsDataURL(file), encoded with encodeURIComponent, serialized with qs.stringify, and sent with the request header "Content-Type": "application/x-www-form-urlencoded".
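A minimal sketch of that base64 variant, assuming an axios client and a placeholder /upload endpoint (the helper name is illustrative, not from the original code):

// Sketch only: read the file as base64, URL-encode it, and post it as x-www-form-urlencoded.
import qs from "qs";
import axios from "axios";

function uploadAsBase64(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.readAsDataURL(file); // read the file into a base64 data: URL
    reader.onload = () => {
      const body = qs.stringify({
        file: encodeURIComponent(reader.result), // encode the base64 payload
        filename: file.name
      });
      axios
        .post("/upload", body, {
          headers: { "Content-Type": "application/x-www-form-urlencoded" }
        })
        .then(resolve, reject);
    };
    reader.onerror = reject;
  });
}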
This exercise combines ES6 File objects, Ajax uploads, async/await with Promises, server-side file storage, and stream operations into a full-stack skill set, and then raises the difficulty to large files and breakpoint resume.
In the mobile era images dominate social communication, and in the short-video era the files involved are definitely large.
As a running example, think of uploading a large file as eight slices of roughly 1 MB each.
When uploading a large file at the front end, use Blob.prototype.slice to slice the file, upload multiple slices concurrently, and finally send a merge request to inform the server to merge the slices.
The server receives the slice and stores it. After receiving the merge request, the server uses the stream to merge the slice into the final file.
Monitor slice upload progress with the native XMLHttpRequest upload.onprogress event
Use a Vue computed property to derive the upload progress of the whole file from the progress of each slice (a minimal sketch follows this list)
Use spark-md5 to calculate the file hash from the contents of the file
Using the hash, the server can tell whether the file has already been uploaded, so the user is shown an instant-upload ("second pass") success message
Pause the upload of slices through the abort method of XMLHttpRequest
Before uploading, the server returns the names of the slices it has already received, and the front end skips those slices.
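A minimal sketch of the computed property mentioned above, assuming each slice item also records its own size alongside the percentage that the later code stores:

computed: {
  uploadPercentage() {
    if (!this.container.file || !this.data.length) return 0;
    // bytes already sent, summed across all slices
    const loaded = this.data
      .map(({ size, percentage }) => size * percentage)
      .reduce((acc, cur) => acc + cur, 0);
    return parseInt((loaded / this.container.file.size).toFixed(2));
  }
}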
Blob.slice
The Blob.slice() method creates a new Blob object containing the data in the specified byte range of the source Blob.
Return value
A new Blob object containing the selected segment of the original Blob.
Slice
JS file handling has been strengthened: the ES6 File object in the browser and file streams in Node.
Any file is binary, so a Blob can be split.
A slice is defined by a start offset and a size.
HTTP can send n slices concurrently, so the upload is faster and the experience improves.
Slicing on the front end is what lets HTTP concurrency make large-file uploads pleasant.
File.slice performs the slicing; each slice is a Blob, the JS binary file type.
The file can also be previewed before it is uploaded to the server.
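A minimal preview sketch, assuming the component keeps a previewUrl bound to an img element (the property name is illustrative):

handleFileChange(e) {
  const [file] = e.target.files;
  if (!file) return;
  // a local blob: URL lets the img element display the picture without any network request
  this.previewUrl = URL.createObjectURL(file);
}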
Server side
How do we combine these slices back into one file and restore the original picture?
Streams
Readable streams, writable streams
Each chunk is a piece of the binary stream.
Promise.all wraps the write of every chunk.
fse.createWriteStream takes start/end offsets.
Each chunk write creates a readable stream and pipes it into the writable stream.
Idea: use the original file name as the folder name; before merging, store each blob in that folder as filename-index.
HTTP uploads the large-file slices concurrently
Details of uploading files with Vue
Whether on the front end or the back end, transferring files (especially large ones) can go wrong: data gets lost, the network is slow, the server times out.
How to avoid loss?
A hash: the file name is not unique, since pictures with different names can have identical content, so the hash is computed over the file content.
The hash computed on the front end is deterministic and one-way.
If the back end computes the same hash from the content it received, the transfer is intact; if it differs, the slice must be retransmitted.
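A minimal sketch of hashing the content (rather than the name) with spark-md5 on the main thread; the worker-based version appears later in the article:

// Hash the file's content with spark-md5 (yarn add spark-md5).
import SparkMD5 from "spark-md5";

function hashFile(file) {
  return new Promise(resolve => {
    const reader = new FileReader();
    reader.readAsArrayBuffer(file);
    reader.onload = e => {
      const spark = new SparkMD5.ArrayBuffer();
      spark.append(e.target.result); // feed the binary content, not the file name
      resolve(spark.end());          // hex digest: identical content gives an identical hash
    };
  });
}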
How do you understand HTML5 features such as localStorage...
Web Workers improve front-end performance: long-running, complex computation is moved to a separate thread.
For file upload, the hash computation is what confirms the file content arrived intact.
Which ES6 features do you use, and how?
Default values for function parameters, for example (see the snippet below).
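The article's own createFileChunk relies on exactly this feature; here is a self-contained version with an illustrative 1 MB default:

// ES6 default parameter value: callers may omit `size`.
function createFileChunk(file, size = 1 * 1024 * 1024) {
  const chunks = [];
  let cur = 0;
  while (cur < file.size) {
    chunks.push({ file: file.slice(cur, cur + size) });
    cur += size;
  }
  return chunks;
}
// createFileChunk(blob)                    -> 1 MB slices
// createFileChunk(blob, 5 * 1024 * 1024)   -> 5 MB slices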
Give users quick feedback; user experience is the core.
Concurrent HTTP improves the experience on both the front end and the back end.
Breakpoint resume
Upload, hash, abort, resume
Initialize the project and install the dependencies:

yarn init -y
yarn global add live-server   # simple local web/http server
yarn add multiparty           # form file upload on the server
yarn add element-ui

$ vue --version
@vue/cli 4.5.13
$ vue create vue-upload-big-file
? Please pick a preset: Manually select features
? Check the features needed for your project: Choose Vue version, Babel
? Choose a version of Vue.js that you want to start the project with: 2.x
? Where do you prefer placing config for Babel, ESLint, etc.? In package.json
? Save this as a preset for future projects? No

A File object selected in the browser looks like this:

lastModified: 1644549553742
lastModifiedDate: Fri Feb 11 2022 11:19:13 GMT+0800 (China Standard Time)
name: "banner.png"
size: 138424
type: "image/png"
webkitRelativePath: ""
When generating file slices, each slice needs an identifier to serve as its hash. Here we temporarily use file name + index, so the back end knows which slice it is handling; this is also what the later merge step relies on.
Then uploadChunks is called to upload all the slices: each slice, its hash, and the file name are put into FormData, the request function from the previous step is called (it returns a promise), and finally Promise.all uploads all the slices concurrently.
yarn add fs-extra
FormData.append()
FormData is used to send data.
FormData.append(name, value, filename): filename is an optional third parameter, the file name reported to the server. When a Blob or File is used as the second parameter, the default file name of a Blob is "blob".
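A minimal sketch of appending a slice; the field names match the upload code later in this article, and the commented third-argument form is only an illustration:

const formData = new FormData();
formData.append("chunk", chunk);                        // a Blob slice; with no filename it is reported as "blob"
formData.append("hash", hash);                          // slice identifier, e.g. fileHash + "-" + index
formData.append("filename", this.container.file.name);
// the optional third argument overrides the reported file name:
// formData.append("chunk", chunk, `${hash}.part`);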
Upload large files
Convert the large file to a binary stream (Blob)
Use the fact that the stream can be cut to split it into multiple parts
Assemble the same number of request blocks as parts and send the requests in parallel or serially
Then send a merge request to the server
Breakpoint resume
Give each file slice a distinct identifier (hash)
When a slice uploads successfully, record its identifier
After a pause or a failure, resend only the slices that did not upload successfully
Create the slices:

createFileChunk(file, size = chunkSize) {
  const fileChunkList = [];
  let count = 0;
  while (count < file.size) {
    fileChunkList.push({ file: file.slice(count, count + size) });
    count += size;
  }
  return fileChunkList;
}

Concurrency and retry (a demo for controlling request concurrency):

const sendRequest = (urls, max, callback) => {
  let finished = 0;
  const total = urls.length;
  const handler = () => {
    if (urls.length) {
      const url = urls.shift();
      fetch(url)
        .then(() => {
          finished++;
          handler();
        })
        .catch(err => {
          throw Error(err);
        });
    }
    if (finished >= total) {
      callback();
    }
  };
  // kick off the initial batch of concurrent requests
  for (let i = 0; i < max; i++) {
    handler();
  }
};

const urls = Array.from({ length: 10 }, (v, k) => k);
// mock "fetch": each request resolves after a random delay
const fetch = function (idx) {
  return new Promise(resolve => {
    const timeout = parseInt(Math.random() * 1e4);
    console.log('- request start');
    setTimeout(() => {
      console.log('- request end');
      resolve(idx);
    }, timeout);
  });
};
const max = 4;
const callback = () => {
  console.log('all requests have been executed');
};
sendRequest(urls, max, callback);
With worker processing, performance and speed improve greatly.
// generate the file hash (in a web worker)
calculateHash(fileChunkList) {
  return new Promise(resolve => {
    this.container.worker = new Worker('./hash.js');
    this.container.worker.postMessage({ fileChunkList });
    this.container.worker.onmessage = e => {
      const { percentage, hash } = e.data;
      if (this.tempFilesArr[fileIndex]) {
        this.tempFilesArr[fileIndex].hashProgress = Number(percentage.toFixed(0));
      }
      if (hash) {
        resolve(hash);
      }
    };
  });
}
Merging of files
mergeRequest(data) {
  const obj = {
    md5: data.fileHash,
    fileName: data.name,
    fileChunkNum: data.chunkList.length
  };
  instance.post('fileChunk/merge', obj, { timeout: 0 }).then(res => {
    this.$message.success('upload successful');
  });
}

Source:

methods: {
  handleFileChange(e) {
    const [file] = e.target.files;
    if (!file) return;
    Object.assign(this.$data, this.$options.data());
    this.container.file = file;
  },
  async handleUpload() {}
}
XMLHttpRequest encapsulation:
request({ url, method = "post", data, headers = {}, requestList }) {
  return new Promise(resolve => {
    const xhr = new XMLHttpRequest();
    xhr.open(method, url);
    Object.keys(headers).forEach(key => xhr.setRequestHeader(key, headers[key]));
    xhr.send(data);
    xhr.onload = e => {
      resolve({ data: e.target.response });
    };
  });
}
Upload slicing
Slice a file
Transfer the slice to the server
const SIZE = 10 * 1024 * 1024; // slice size

data: () => ({
  container: { file: null },
  data: []
}),

handleFileChange() {},

// generate file slices
createFileChunk(file, size = SIZE) {
  const fileChunkList = [];
  let cur = 0;
  while (cur < file.size) {
    fileChunkList.push({ file: file.slice(cur, cur + size) });
    cur += size;
  }
  return fileChunkList;
},

// upload slices
async uploadChunks() {
  const requestList = this.data
    .map(({ chunk, hash }) => {
      const formData = new FormData();
      formData.append("chunk", chunk);
      formData.append("hash", hash);
      formData.append("filename", this.container.file.name);
      return { formData };
    })
    .map(async ({ formData }) =>
      this.request({
        url: "http://localhost:3000",
        data: formData
      })
    );
  await Promise.all(requestList); // upload the slices concurrently
},

async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
  this.data = fileChunkList.map(({ file }, index) => ({
    chunk: file,
    hash: this.container.file.name + '-' + index // file name + array index
  }));
  await this.uploadChunks();
}
Send a merge request
async mergeRequest() {
  await this.request({
    url: "http://localhost:3000/merge",
    headers: { "content-type": "application/json" },
    data: JSON.stringify({ filename: this.container.file.name })
  });
},
async handleUpload() {}
Set up the server with Node's http module:
const http = require("http");
const server = http.createServer();

server.on("request", async (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Headers", "*");
  if (req.method === "OPTIONS") {
    res.statusCode = 200;
    res.end();
    return;
  }
});

server.listen(3000, () => console.log("listening on port 3000"));
Use the multiparty package to handle the FormData sent by the front end.
In the multiparty.parse callback, the files parameter holds the files from the FormData and the fields parameter holds the non-file fields.
const UPLOAD_DIR = path.resolve(__dirname, "..", "target"); // large-file storage directory

const multipart = new multiparty.Form();
multipart.parse(req, async (err, fields, files) => {
  if (err) {
    return;
  }
  const [chunk] = files.chunk;
  const [hash] = fields.hash;
  const [filename] = fields.filename;
  const chunkDir = path.resolve(UPLOAD_DIR, filename);

  // create the slice directory if it does not exist
  if (!fse.existsSync(chunkDir)) {
    await fse.mkdirs(chunkDir);
  }
  // fs-extra's move is similar to fs.rename but cross-platform
  // (fs.rename can hit permission problems on Windows)
  await fse.move(chunk.path, `${chunkDir}/${hash}`);
  res.end("received file chunk");
});
Merging the slices
// after receiving the merge request from the front end, the server merges all slices in the folder
const resolvePost = req =>
  new Promise(resolve => {
    let chunk = "";
    req.on("data", data => {
      chunk += data;
    });
    req.on("end", () => {
      resolve(JSON.parse(chunk));
    });
  });

const pipeStream = (path, writeStream) =>
  new Promise(resolve => {
    const readStream = fse.createReadStream(path);
    readStream.on("end", () => {
      fse.unlinkSync(path);
      resolve();
    });
    readStream.pipe(writeStream);
  });

// merge slices
const mergeFileChunk = async (filePath, filename, size) => {
  const chunkDir = path.resolve(UPLOAD_DIR, filename);
  const chunkPaths = await fse.readdir(chunkDir);
  // sort by slice index,
  // otherwise reading the directory may return an arbitrary order
  chunkPaths.sort((a, b) => a.split("-")[1] - b.split("-")[1]);
  await Promise.all(
    chunkPaths.map((chunkPath, index) =>
      pipeStream(
        path.resolve(chunkDir, chunkPath),
        // create a writable stream at the specified offset
        fse.createWriteStream(filePath, {
          start: index * size,
          end: (index + 1) * size
        })
      )
    )
  );
  fse.rmdirSync(chunkDir); // delete the directory that held the slices
};

if (req.url === '/merge') {
  const data = await resolvePost(req);
  const { filename, size } = data;
  const filePath = path.resolve(UPLOAD_DIR, `${filename}`);
  await mergeFileChunk(filePath, filename, size);
  res.end(JSON.stringify({ code: 0, message: "file merged success" }));
}
Create the writable stream with fse.createWriteStream; its file name combines the slice folder name with the original file suffix.
Create a readable stream for each slice with fse.createReadStream, then pipe them into the target file to merge.
Generate hash
// public/hash.js
self.importScripts("/spark-md5.min.js"); // import the script

// generate the file hash
self.onmessage = e => {
  const { fileChunkList } = e.data;
  const spark = new self.SparkMD5.ArrayBuffer();
  let percentage = 0;
  let count = 0;
  const loadNext = index => {
    const reader = new FileReader();
    reader.readAsArrayBuffer(fileChunkList[index].file);
    reader.onload = e => {
      count++;
      spark.append(e.target.result);
      if (count === fileChunkList.length) {
        self.postMessage({ percentage: 100, hash: spark.end() });
        self.close();
      } else {
        percentage += 100 / fileChunkList.length;
        self.postMessage({ percentage });
        // recursively compute the next slice
        loadNext(count);
      }
    };
  };
  loadNext(0);
};

Main-thread logic for communicating with the worker:

// generate the file hash
calculateHash(fileChunkList) {
  return new Promise(resolve => {
    // keep the worker instance on container
    this.container.worker = new Worker('/hash.js');
    this.container.worker.postMessage({ fileChunkList });
    this.container.worker.onmessage = e => {
      const { percentage, hash } = e.data;
      this.hashPercentage = percentage;
      if (hash) {
        resolve(hash);
      }
    };
  });
}
Instant upload (second pass)
async verifyUpload(filename, fileHash) {
  const { data } = await this.request({
    url: "http://localhost:3000/verify",
    headers: { "content-type": "application/json" },
    data: JSON.stringify({ filename, fileHash })
  });
  return JSON.parse(data);
},

async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
  this.container.hash = await this.calculateHash(fileChunkList);
  const { shouldUpload } = await this.verifyUpload(
    this.container.file.name,
    this.container.hash
  );
  if (!shouldUpload) {
    this.$message.success("instant upload: uploaded successfully");
    return;
  }
  this.data = fileChunkList.map(({ file }, index) => ({
    fileHash: this.container.hash,
    index,
    hash: this.container.hash + "-" + index,
    chunk: file,
    percentage: 0
  }));
  await this.uploadChunks();
}
Server:
// extract the file suffix
const extractExt = filename => filename.slice(filename.lastIndexOf("."), filename.length);
Pausing the upload
request({ url, method = "post", data, headers = {}, onProgress = e => e, requestList }) {
  return new Promise(resolve => {
    const xhr = new XMLHttpRequest();
    xhr.upload.onprogress = onProgress;
    xhr.open(method, url);
    Object.keys(headers).forEach(key => xhr.setRequestHeader(key, headers[key]));
    xhr.send(data);
    xhr.onload = e => {
      // requestList keeps only the xhrs that are currently uploading slices;
      // remove the xhr of a successful request from the list
      if (requestList) {
        const xhrIndex = requestList.findIndex(item => item === xhr);
        requestList.splice(xhrIndex, 1);
      }
      resolve({ data: e.target.response });
    };
    // expose the current xhr to the outside
    requestList?.push(xhr);
  });
}
Pause button
handlePause() {
  this.requestList.forEach(xhr => xhr?.abort());
  this.requestList = [];
}
Before each upload the front end sends a verification request, and the server returns one of two results:
The file already exists on the server, so it does not need to be uploaded again.
The file does not exist on the server, or only some of its slices have been uploaded; the front end is told to upload, and the list of already-uploaded slices is returned to it.
Server verification interface
// return the list of uploaded slice names
const createUploadedList = async fileHash =>
  fse.existsSync(path.resolve(UPLOAD_DIR, fileHash))
    ? await fse.readdir(path.resolve(UPLOAD_DIR, fileHash))
    : [];

if (fse.existsSync(filePath)) {
  res.end(JSON.stringify({ shouldUpload: false }));
} else {
  res.end(
    JSON.stringify({
      shouldUpload: true,
      uploadedList: await createUploadedList(fileHash)
    })
  );
}
When the user clicks upload, check whether the file needs uploading at all and which slices have already been uploaded.
When the user clicks resume after a pause, the list of uploaded slices is fetched again.
async handleResume() {
  this.status = Status.uploading;
  const { uploadedList } = await this.verifyUpload(
    this.container.file.name,
    this.container.hash
  );
  await this.uploadChunks(uploadedList);
}

Resuming from a breakpoint can work in two ways:
The server returns the uploaded slices and tells the client where to start.
The browser keeps track on its own (see the cache handling below).
Cache processing
In the success callback of each slice's axios upload, record that slice as uploaded.
Before uploading slices, check localStorage for slices already uploaded and mark them as uploaded.
When building the slice data, filter out the ones whose uploaded flag is true (a sketch follows).
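A minimal sketch of that localStorage bookkeeping; the key layout and helper names are illustrative, not from the article's source:

// Record uploaded slices per file hash, with a creation timestamp for the 12-hour cleanup rule below.
const CACHE_TTL = 12 * 60 * 60 * 1000;

function markChunkUploaded(fileHash, chunkHash) {
  const record = JSON.parse(localStorage.getItem(fileHash)) || { createdAt: Date.now(), chunks: [] };
  if (!record.chunks.includes(chunkHash)) record.chunks.push(chunkHash);
  localStorage.setItem(fileHash, JSON.stringify(record));
}

function getUploadedChunks(fileHash) {
  const record = JSON.parse(localStorage.getItem(fileHash));
  if (!record) return [];
  if (Date.now() - record.createdAt > CACHE_TTL) {
    localStorage.removeItem(fileHash); // expired: clean the local cache
    return [];
  }
  return record.chunks;
}

// when building slice data, skip slices already recorded as uploaded:
// const uploaded = getUploadedChunks(this.container.hash);
// this.data = this.data.filter(({ hash }) => !uploaded.includes(hash));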
Garbage file cleaning
The front end records a cache time in localStorage; when the time is up it sends a request asking the back end to clean up the leftover chunk files, and it clears its own cache as well.
Front end and back end agree that each cache lives at most 12 hours from creation and is cleaned automatically after that.
(Watch out for clock differences between client and server.)
Second pass
Principle: compute the hash (MD5) of the whole file and send it to the server before the upload starts; the back end looks the file up. If the file already exists on the server, nothing further happens and the upload finishes immediately.
After the current file has been uploaded in parts and the merge interface has returned, move on to the next file in the loop. Each click on the input clears the previous data.
Q: how to keep the progress bar from moving backwards after a paused upload resumes
Define a temporary variable fakeUploadProgress that stores the current progress each time the upload is paused. After resuming, only update the displayed progress once the real progress exceeds fakeUploadProgress.
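A minimal sketch of that idea; only fakeUploadProgress comes from the text above, while uploadPercentage and the watcher are illustrative:

data: () => ({ fakeUploadProgress: 0 }),
watch: {
  // the displayed value only ever moves forward
  uploadPercentage(now) {
    if (now > this.fakeUploadProgress) {
      this.fakeUploadProgress = now;
    }
  }
},
// bind the progress bar to fakeUploadProgress instead of uploadPercentage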
That concludes "how to upload large files and resume uploads at breakpoints with Vue + Node". Thanks for reading.