HTML5 + tracking.js for Face-Scan Payment
Face-scan payment has become so popular recently that every boss wants to keep up with the times, so we were asked to build a face-scan payment feature. The key front-end technologies are camera video capture, photo taking, and face matching. This article discusses how to implement face-scan payment in an HTML5 environment and the problems encountered during development.
1. Camera
1.1 Getting the camera with input
There are two ways to get pictures on a mobile phone in HTML5. Using input, as follows, you can open the camera to take a picture:
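A typical snippet looks like this (the exact attribute values are an assumption and vary by platform; some older browsers expect capture="camera"):

<!-- Opens the camera directly to take a photo -->
<input type="file" accept="image/*" capture="camera">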
In addition, if you want to open the photo album instead, you can do this:
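Omitting the capture attribute lets the user pick from the album (again, a typical form):

<!-- Lets the user choose an existing image from the album -->
<input type="file" accept="image/*">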
But both methods have compatibility issues, as anyone who has used them will know.
1.2 Capturing the camera with getUserMedia
getUserMedia is a new HTML5 API. The official definition is:
MediaDevices.getUserMedia() prompts the user for permission to use a media input, which produces a MediaStream containing tracks of the requested media types. The stream may contain a video track (from a hardware or virtual video source such as a camera, video capture device, or screen-sharing service), an audio track (likewise from a hardware or virtual audio source such as a microphone or A/D converter), or other track types.
Simply put, it lets you access the user's camera.
As with input above, this approach has compatibility issues, but they can be worked around. The MediaDevices.getUserMedia() documentation has a section on "Using the new API in older browsers". Drawing on that and some other references from the web, I put together a relatively comprehensive version of getUserMedia, coded as follows:
// Access the user's media devices
getUserMedia(constraints, success, error) {
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        // Latest standard API
        navigator.mediaDevices.getUserMedia(constraints).then(success).catch(error)
    } else if (navigator.webkitGetUserMedia) {
        // WebKit-based browsers (legacy, callback style)
        navigator.webkitGetUserMedia(constraints, success, error)
    } else if (navigator.mozGetUserMedia) {
        // Firefox (legacy, callback style)
        navigator.mozGetUserMedia(constraints, success, error)
    } else if (navigator.getUserMedia) {
        // Old spec API (legacy, callback style)
        navigator.getUserMedia(constraints, success, error)
    } else {
        this.scanTip = "Your browser does not support access to user media devices"
    }
}
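Calling the wrapper might look like this (a minimal sketch; the exact constraints are an assumption, and the success and error callbacks are defined in the next section):

// Request video only; the resolution values here are placeholders
this.getUserMedia(
    { audio: false, video: { width: 400, height: 300 } },
    this.success,  // receives the MediaStream
    this.error     // receives the error
)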
1.3 Playing the video stream
The method above takes two callbacks, one for success and one for failure. On success, start playing the video. Playing the stream really just means giving the video element a source and calling its play method; setting the source has to account for differences between browsers. The code is as follows:
success(stream) {
    this.streamIns = stream
    // Set the play address; window.URL for WebKit-based browsers
    this.URL = window.URL || window.webkitURL
    if ("srcObject" in this.$refs.refVideo) {
        this.$refs.refVideo.srcObject = stream
    } else {
        this.$refs.refVideo.src = this.URL.createObjectURL(stream)
    }
    this.$refs.refVideo.onloadedmetadata = e => {
        // Play the video
        this.$refs.refVideo.play()
        this.initTracker()
    }
},
error(e) {
    this.scanTip = "Failed to access user media: " + e.name + ", " + e.message
}
Note:
- The call to play() is best placed in the onloadedmetadata callback, otherwise errors may occur.
- For security reasons, the video must be tested in a local environment, i.e. under http://localhost/xxxx, or in an https://xxxxx environment; otherwise the browser may block camera access.
- The initTracker() method used below should likewise be called inside the onloadedmetadata callback, or errors will occur.
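For reference, the markup the code in this article assumes might look like this (the ref names and the #video selector come from the code; the sizes and other attributes are assumptions):

<!-- The video element that tracking.track('#video', ...) targets, and the canvas used for snapshots -->
<video id="video" ref="refVideo" width="400" height="300" playsinline></video>
<canvas ref="refCanvas" width="400" height="300"></canvas>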
2. Capture face
2.1 Capturing faces with tracking.js
Once the video is playing successfully, face recognition can begin. This uses a third-party library, tracking.js, a JavaScript image-recognition plug-in written by a developer abroad. The key code is as follows:
// Face capture
initTracker() {
    this.context = this.$refs.refCanvas.getContext("2d")  // canvas context
    this.tracker = new tracking.ObjectTracker(['face'])   // tracker instance
    this.tracker.setStepSize(1.7)                         // set the step size
    this.tracker.on('track', this.handleTracked)          // bind the tracking listener
    try {
        tracking.track('#video', this.tracker)            // start tracking
    } catch (e) {
        this.scanTip = "Access to user media failed, please try again"
    }
}
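Note that tracking.js ships its face classifier separately from the core library, so both scripts need to be on the page for ObjectTracker(['face']) to work (the paths below are assumptions and depend on where you place the build files):

<script src="tracking-min.js"></script>
<script src="data/face-min.js"></script>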
After capturing a face, you can mark it with a small box on the page, which adds a bit of interactivity.
// Tracking event
handleTracked(e) {
    if (e.data.length === 0) {
        this.scanTip = 'No face detected'
    } else {
        if (!this.tipFlag) {
            this.scanTip = 'Detection succeeded; taking the photo, please hold still for 2 seconds'
        }
        // Take the photo after 2 seconds, and only once
        if (!this.flag) {
            this.scanTip = 'Taking photo...'
            this.flag = true
            this.removePhotoID = setTimeout(() => {
                this.tackPhoto()
                this.tipFlag = true
            }, 2000)
        }
        e.data.forEach(this.plot)
    }
}
Draw some boxes on the page to outline the face; each box is absolutely positioned using the coordinates collected in profile:

<div v-for="item in profile"
     :style="{ width: item.width + 'px', height: item.height + 'px', left: item.left + 'px', top: item.top + 'px' }"></div>
// Draw the tracking box
plot({x, y, width: w, height: h}) {
    // Create a box object for each tracked face
    this.profile.push({ width: w, height: h, left: x, top: y })
}
2.2 Taking the photo
To take the photo, use the video element as the image source and save a frame into the canvas. Note that the toDataURL method takes a second parameter, quality, ranging from 0 to 1: 0 means a rough picture but a smaller file, 1 means the best quality. (The quality argument only takes effect for lossy formats such as image/jpeg; it is ignored for image/png.)
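For example, with a lossy format the quality argument takes effect (assuming a canvas variable):

// Smaller file, rougher image vs. best quality, larger file
var rough = canvas.toDataURL('image/jpeg', 0.3)
var best = canvas.toDataURL('image/jpeg', 1.0)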
// Take the photo
tackPhoto() {
    // Draw the current video frame onto the canvas
    this.context.drawImage(this.$refs.refVideo, 0, 0, this.screenSize.width, this.screenSize.height)
    // Save in base64 format
    this.imgUrl = this.saveAsPNG(this.$refs.refCanvas)
    // this.compare(imgUrl)
    this.close()
},
// Convert base64 to a file (Blob)
getBlobBydataURI(dataURI, type) {
    var binary = window.atob(dataURI.split(',')[1]);
    var array = [];
    for (var i = 0; i < binary.length; i++) {
        array.push(binary.charCodeAt(i));
    }
    return new Blob([new Uint8Array(array)], { type: type });
},
// Save the canvas as a png image in base64 format
saveAsPNG(c) {
    // Note: the quality argument is ignored for image/png; it only applies to lossy formats
    return c.toDataURL('image/png', 0.3)
}
After the photo is taken, the file can be sent to the back end for comparison and verification; the back end uses Aliyun's face-recognition interface.
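A minimal upload sketch might look like this (the endpoint /api/face/compare and the field name are assumptions; the real interface depends on your back end):

// Convert the base64 snapshot to a Blob and post it for comparison
uploadPhoto() {
    var blob = this.getBlobBydataURI(this.imgUrl, 'image/png')
    var formData = new FormData()
    formData.append('file', blob, 'face.png')
    fetch('/api/face/compare', { method: 'POST', body: formData })
        .then(res => res.json())
        .then(result => {
            // Decide between face payment and password payment based on the result
        })
        .catch(err => {
            this.scanTip = 'Face comparison failed, please try again'
        })
}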
3. Final results
3.1 Reference code demo
Finally, the demo is already on GitHub; if you're interested, you can open it and take a look.
The results are as follows:
3.2 Using it in a real project
Finally, in the real project there is nothing more than the last step: calling the comparison interface and, depending on whether the comparison succeeds or fails, deciding whether to pay by face or fall back to the original password. The effect is as follows:
P.S.: Face matching failed here because I was wearing a mask and my face wasn't visible. The Aliyun interface documentation the back end calls: https://help.aliyun.com/document_detail/154615.html?spm=a2c4g.11186623.6.625.632a37b9brzAoi
Author: Tyler Ning
Source: http://www.cnblogs.com/tylerdonet/