2025-02-26 Update · SLTechnology News&Howtos > Development
Shulou (Shulou.com) 06/01 Report
In this article the editor shares how to implement a face-scanning payment feature with HTML5 and tracking.js. Most readers may not be familiar with the topic, so the article is shared for reference; I hope you learn something from reading it. Let's get into it!
1. Camera

1.1 Getting the camera with input
There are two ways to get the user's camera in HTML5. The first is to use an input element with the capture attribute; to open the photo album instead, drop the capture attribute.
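A minimal sketch of the two input variants described above (the original snippets were lost in extraction; attribute support varies by browser and platform):

```html
<!-- Opens the camera directly on most mobile browsers -->
<input type="file" accept="image/*" capture="camera">

<!-- Without capture, the browser typically offers the photo album / file picker -->
<input type="file" accept="image/*">
```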
Both approaches have compatibility problems, as anyone who has used them will know.
1.2 Getting the camera image with getUserMedia
getUserMedia is a newer HTML5 API, officially defined as:
MediaDevices.getUserMedia() prompts the user for permission to use a media input, which produces a MediaStream containing tracks of the requested media types. The stream can include a video track (from a hardware or virtual video source such as a camera, video capture device, or screen-sharing service), an audio track (likewise from a hardware or virtual audio source such as a microphone or A/D converter), or other track types.
To put it simply, it lets you access the user's camera.
Like input above, this approach also has compatibility issues, but they can be worked around; see the section "Using the new API in older browsers" in the MediaDevices.getUserMedia() documentation. Drawing on that and a few other references, I put together a fairly complete version of getUserMedia. The code is as follows:
    // Access the user's media device
    getUserMedia(constraints, success, error) {
      if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        // Current standard API (promise-based)
        navigator.mediaDevices.getUserMedia(constraints).then(success).catch(error)
      } else if (navigator.webkitGetUserMedia) {
        // WebKit-based browsers (legacy, callback-based)
        navigator.webkitGetUserMedia(constraints, success, error)
      } else if (navigator.mozGetUserMedia) {
        // Firefox (legacy, callback-based)
        navigator.mozGetUserMedia(constraints, success, error)
      } else if (navigator.getUserMedia) {
        // Old standard API (callback-based)
        navigator.getUserMedia(constraints, success, error)
      } else {
        this.scanTip = 'Your browser does not support access to user media devices'
      }
    }
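The legacy prefixed APIs are callback-based rather than promise-based, so another option is a wrapper that normalizes them all behind a single promise. A minimal sketch, where the `nav` parameter and the function name are my own illustrative choices (not from the article); `nav` stands in for `window.navigator` so the branching logic can be exercised outside a browser:

```javascript
// Pick whichever getUserMedia variant the browser exposes and
// normalize it to a promise that resolves with a MediaStream.
function getUserMediaCompat(constraints, nav) {
  nav = nav || navigator;
  if (nav.mediaDevices && nav.mediaDevices.getUserMedia) {
    // Modern standard: already promise-based.
    return nav.mediaDevices.getUserMedia(constraints);
  }
  // Legacy prefixed variants take (constraints, success, error) callbacks.
  const legacy = nav.webkitGetUserMedia || nav.mozGetUserMedia || nav.getUserMedia;
  if (!legacy) {
    return Promise.reject(new Error('getUserMedia is not supported in this browser'));
  }
  return new Promise((resolve, reject) => {
    legacy.call(nav, constraints, resolve, reject);
  });
}
```

In the component above this could replace the hand-rolled branching, with `.then(this.success).catch(this.error)` at the call site.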
1.3 Playing the video
The method above takes two callbacks, one for success and one for failure. On success we start playing the video: playing simply means giving the video element a source and calling its play method. Browser compatibility again has to be considered when setting the source. The code is as follows:
    success(stream) {
      this.streamIns = stream
      // Set the playback source; WebKit-based browsers use the prefixed URL object
      this.URL = window.URL || window.webkitURL
      if ('srcObject' in this.$refs.refVideo) {
        this.$refs.refVideo.srcObject = stream
      } else {
        this.$refs.refVideo.src = this.URL.createObjectURL(stream)
      }
      this.$refs.refVideo.onloadedmetadata = e => {
        // Play the video
        this.$refs.refVideo.play()
        this.initTracker()
      }
    },
    error(e) {
      this.scanTip = 'Failed to access user media: ' + e.name + ', ' + e.message
    }
Note:
It is best to call the video's play method inside the onloadedmetadata callback, otherwise an error may be thrown.
For security reasons, video playback must be tested either locally (i.e. at http://localhost/xxxx) or in an https://xxxxx environment; otherwise you may run into cross-origin problems.
The initTracker() method used below should also be called from this onloadedmetadata callback, otherwise it will throw an error.
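For reference, a constraints object along the following lines is what gets passed to getUserMedia for a face-scanning use case. The helper name and the chosen defaults are illustrative assumptions, not from the article:

```javascript
// Build a getUserMedia constraints object for a front-facing camera
// of roughly the given size. "ideal" lets the browser pick the
// closest supported resolution instead of failing outright.
function buildVideoConstraints(width, height) {
  return {
    audio: false,
    video: {
      width: { ideal: width },
      height: { ideal: height },
      facingMode: 'user' // front camera, which face scanning needs
    }
  };
}

// Usage: navigator.mediaDevices.getUserMedia(buildVideoConstraints(640, 480))
```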
2. Capturing a face
2.1 Capturing faces with tracking.js
Once the video is playing in the video element, face recognition can begin. Here we use a third-party library, tracking.js, an open-source JavaScript image-recognition plug-in. The key code is as follows:
    // Face capture
    initTracker() {
      this.context = this.$refs.refCanvas.getContext('2d')   // canvas context
      this.tracker = new tracking.ObjectTracker(['face'])    // tracker instance
      this.tracker.setStepSize(1.7)                          // set the step size
      this.tracker.on('track', this.handleTracked)           // bind the track listener
      try {
        tracking.track('#video', this.tracker)               // start tracking
      } catch (e) {
        this.scanTip = 'Failed to access user media, please try again'
      }
    }
Once a face is captured, it can be marked with a small box on the page, which makes for a nicer interaction.
    // Track event handler
    handleTracked(e) {
      if (e.data.length === 0) {
        this.scanTip = 'No face detected'
      } else {
        if (!this.tipFlag) {
          this.scanTip = 'Face detected; please hold still for 2 seconds'
        }
        // Take the photo after 2 seconds, and only take it once
        if (!this.flag) {
          this.scanTip = 'Taking the photo...'
          this.flag = true
          this.removePhotoID = setTimeout(() => {
            this.tackPhoto()
            this.tipFlag = true
          }, 2000)
        }
        e.data.forEach(this.plot)
      }
    }
Draw some boxes on the page to identify the face:
    // Record a tracking box
    plot({ x, y, width: w, height: h }) {
      // Store the box geometry; the template renders one box per entry
      this.profile.push({ width: w, height: h, left: x, top: y })
    }
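The component presumably renders each entry in this.profile as an absolutely positioned element over the video. A sketch of a helper that turns a tracked box into inline CSS; the helper name and style shape are my assumptions, not from the article:

```javascript
// Convert a tracking.js rectangle into a CSS style object that an
// absolutely positioned <div> overlaying the video could use.
function boxToStyle({ x, y, width, height }) {
  return {
    position: 'absolute',
    left: x + 'px',
    top: y + 'px',
    width: width + 'px',
    height: height + 'px',
    border: '2px solid #42b983' // box outline; the color is arbitrary
  };
}
```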
2.2 Taking a photo
To take the photo, use the video element as the image source and draw a frame into a canvas. Note that toDataURL accepts a second quality parameter from 0 to 1, where 0 means a rough picture but a small file and 1 means the best quality; browsers only apply it to lossy formats such as image/jpeg and image/webp, and ignore it for image/png.
    // Take a snapshot
    tackPhoto() {
      this.context.drawImage(this.$refs.refVideo, 0, 0, this.screenSize.width, this.screenSize.height)
      // Save in base64 format
      this.imgUrl = this.saveAsPNG(this.$refs.refCanvas)
      // this.compare(this.imgUrl)
      this.close()
    },
    // Convert a base64 data URI to a file (Blob)
    getBlobBydataURI(dataURI, type) {
      var binary = window.atob(dataURI.split(',')[1])
      var array = []
      for (var i = 0; i < binary.length; i++) {
        array.push(binary.charCodeAt(i))
      }
      return new Blob([new Uint8Array(array)], { type: type })
    },
    // Save the canvas as a PNG data URI (the quality argument is ignored for PNG)
    saveAsPNG(c) {
      return c.toDataURL('image/png', 0.3)
    }
After the photo is taken, the file can be sent to the backend for comparison and verification; here the backend uses Aliyun's face comparison interface.
The last step is simply to call that interface and, depending on whether the comparison succeeds or fails, either pay by face or fall back to the original password.
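As one possible shape for that hand-off, the captured data URI can be converted to a Blob and posted with FormData. This is a sketch under stated assumptions: the /api/face/verify endpoint, field names, and function names are hypothetical; only the data-URI-to-Blob conversion mirrors the article's getBlobBydataURI:

```javascript
// Decode a base64 data URI into a Blob, as getBlobBydataURI does above.
function dataURIToBlob(dataURI, type) {
  const binary = atob(dataURI.split(',')[1]);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type });
}

// Hypothetical upload: POST the captured frame to the backend,
// which would forward it to the face comparison service.
function uploadSnapshot(imgUrl) {
  const form = new FormData();
  form.append('photo', dataURIToBlob(imgUrl, 'image/png'), 'face.png');
  return fetch('/api/face/verify', { method: 'POST', body: form })
    .then(res => res.json());
}
```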
That is all of the content of the article "How to use HTML5 + tracking.js to implement a face-scanning payment feature". Thank you for reading! I hope what was shared here is helpful; if you want to learn more, welcome to follow the industry information channel!