Parsing the ArcSoft Face Recognition C# Demo

Keywords: C# SDK Database github C

Summary

Whether you notice it or not, face recognition has quietly entered everyday life: DingTalk supports face-based clock-in, railway stations have added self-service face verification lanes for real-name authentication, not to mention the "smart cities" and "city brains" being built everywhere. The face recognition field generally consists of face recognition technology providers and application integrators. Developing face recognition technology from scratch requires deep specialized knowledge and strong mathematical and algorithmic skills, so for most companies it is more practical to adopt a ready-made face recognition engine from an AI vendor. ArcSoft opened version 1.0 of its face recognition platform in 2017; after three years of technical iteration it has now released version 2.2, which is offline, free, and suitable for a wide range of scenarios. To make integration easier, ArcSoft officially provides Demo programs in several languages. Because ArcSoft does not ship a C# version of the SDK itself, the official C# Demo is all the more valuable as a reference.

The ArcSoft Demo can be downloaded from https://github.com/ArcsoftEscErd/ArcfaceDemo_CSharp_2.2. I suggest downloading it before you start.

What is Face Recognition

Face recognition is a biometric technology that identifies people based on facial feature information. A camera captures images or a video stream containing faces, the faces are automatically detected and tracked in the image, and a series of related techniques are then applied to recognize the detected faces; this is commonly called face recognition or facial recognition.
The face recognition process can be summarized as: detect the face box -> extract face feature information -> search the face database for a matching record.

Application Scenarios of Face Recognition

Face recognition is mainly used for identity verification. With the rapid spread of video surveillance, many surveillance applications urgently need a fast identification technique that works at a distance and without user cooperation, so that people can be identified quickly from afar and intelligent early warning can be realized. Face recognition is undoubtedly the best fit: fast face detection can find faces in surveillance video in real time and compare them against a face database on the fly, achieving rapid identification.
In daily life it is widely used, from the most common face-based access control to real-name security checks, ticket checking at scenic spots, face-based check-in at companies and schools, unmanned supermarkets, and so on.

What is liveness detection?

Liveness detection, as the name implies, distinguishes genuine biological features from forgeries made of non-living materials such as photos, silicone and plastic by recognizing physiological signs of life. In face recognition applications, liveness detection is used to judge whether the face image captured by the system comes from a real, live face, preventing photos, videos and other fake faces from being fed into the system and causing misjudgments. Liveness detection is especially important for face recognition in unattended scenarios.

ArcSoft Face Recognition SDK

There are currently many face recognition solutions on the market. By network requirement they can be divided into online and offline; by access mode, into local recognition and server-side big-data recognition. ArcSoft provides an offline recognition SDK based on a local algorithm; its underlying algorithm is written in C and offers offline support across all supported platforms.

ArcSoft Vision Open Platform

The ArcSoft face recognition SDK is distributed through the Vision Open Platform, which includes the components most commonly used in face recognition scenarios: face detection, face recognition, age and gender detection, liveness detection, and so on. Face detection is optimized separately for static (image) and dynamic (video) scenarios; the derived gender and age detection broadens the range of face recognition use cases, and the liveness detection component effectively protects the security of face recognition applications.
Visit https://ai.arcsoft.com.cn/third/mobile.html?segmentfault and follow the prompts on the site to register an account and download the SDK package.

A Brief Introduction to ArcSoft's Face Recognition Demo

Unlike the many RESTful-style interfaces out there, the ArcSoft SDK does not use the common HTTP-based approach, nor does it provide an SDK package for C#; it only ships a C-language SDK, which makes access from C# somewhat difficult. When it was first released, many experts wrote their own access Demos, and later ArcSoft published an official one as well. From the first version in January 2018 to the current 2.2 version updated along with the SDK, the code structure and comments have become much clearer.

Demo Effect Show

The Demo is a standard C# WinForms project; after downloading it from GitHub, you can open it directly in Visual Studio.
After opening it you will find a readme.md file, which is very important; please read it carefully before you start. The main points are summarized below.

  1. Register and log in to an ArcSoft developer account and download the ArcFace Win32/Win64 SDK; version 2.2 is recommended.
  2. Fill the APPID and KEY generated when you downloaded the SDK into the corresponding locations in the app.config file (see the configuration-reading sketch after this list).
  3. Unzip the downloaded package and copy the DLLs for your platform into the folder for that platform (x86 or x64).
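For reference, reading these values in code usually looks like the snippet below. This is only a sketch: the key names are assumptions, so check the Demo's actual app.config for the real ones.

    //Requires a reference to System.Configuration.
    //Key names ("APP_ID", "SDKKEY32", "SDKKEY64") are illustrative; use the names found in app.config.
    string appId    = System.Configuration.ConfigurationManager.AppSettings["APP_ID"];
    string sdkKey32 = System.Configuration.ConfigurationManager.AppSettings["SDKKEY32"];
    string sdkKey64 = System.Configuration.ConfigurationManager.AppSettings["SDKKEY64"];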

If the above steps are completed correctly, the program should run normally. If anything goes wrong along the way, refer back to the readme for troubleshooting.

Once everything is OK, the program pops up its main window. You can search online for a few celebrity photos to register and compare.
As shown in the following figure:

As you can see, the ArcSoft Demo recognizes face information correctly.

The Demo also provides liveness detection. If your machine does not have a camera, you can plug in a USB camera, then click the camera button to turn it on.

If you point it at your own face, it reports an RGB live face; if you try to fool it with a photo or a video, it reports an RGB spoof.

Demo Code Analysis for Face Recognition

Now let's get to the main topic. Open the project view and analyze the code structure and the main flow of the ArcSoft face recognition Demo from the code's point of view.

As the figure above shows, the Demo's code structure is quite clear.

Directory: Description
Entity: holds some entity classes.
lib: third-party library, mainly used to obtain the content of video frames.
SDKModels: the data model classes of the SDK; they interact directly with the SDK and generally need no attention in ordinary use.
SDKUtils: the C# wrappers for the SDK functions; it is recommended to use the second-level wrapper classes in Utils instead.
Utils: utility classes that make complex SDK operations simple and can be used directly in your own project.

All the UI interaction code is in FaceForm.cs. Open the code view; each region of the code is clearly structured. Let's look at the functions of the main parts.

Definition of parameters

The parameter definition section mainly defines some parameters, each with a corresponding comment. We need to pay attention to the maximum image size and the similarity threshold.

private long maxSize = 1024 * 1024 * 2;

This parameter defines the maximum image size that can be identified and can be adjusted as needed.

private float threshold = 0.8f;

This parameter defines the confidence threshold: when the similarity between two faces reaches this value, they are considered to be the same person.

Engine Initialization

InitEngines(), an important method of initialization, is used to initialize the face recognition engine.

This part of the code first reads the configuration file and then performs the engine activation; if an error occurs, it pops up a message.

Note that because C# supports multiple CPU architectures, and the 32-bit and 64-bit versions of the ArcSoft SDK ship different DLLs, we need to determine which mode the process is running in.

var is64CPU = Environment.Is64BitProcess;

After judging the CPU, try to load the corresponding DLL and call the activation process.

        int retCode = 0;
        try
        {
            retCode = ASFFunctions.ASFActivation(appId, is64CPU ? sdkKey64 : sdkKey32);
        }
        catch (Exception ex)
        {
            //Disable the related function buttons
            ControlsEnable(false, chooseMultiImgBtn, matchBtn, btnClearFaceList, chooseImgBtn);
            if (ex.Message.Contains("Unable to load DLL"))
            {
                MessageBox.Show("Please put the SDK DLLs into the corresponding x86 or x64 folder under bin!");
            }
            else
            {
                MessageBox.Show("Failed to activate the engine!");
            }
            return;
        }

The ArcSoft SDK must be activated before it can be used. During activation you must make sure the device can connect to the Internet; otherwise activation will fail.
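When writing your own activation logic, it also helps to check the return code. The sketch below assumes the "already activated" error code documented by ArcSoft (90114); verify it against the SDK's error-code table before relying on it.

    //0 means success; an "already activated" code (documented as 90114) can usually be treated as success.
    if (retCode != 0 && retCode != 90114)
    {
        MessageBox.Show("Engine activation failed, error code: " + retCode);
        return;
    }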

Next we configure the engine's functionality. In most cases the default configuration can be kept; if you need to adjust it, focus on the following parameters.

    //Ratio of the face to the image; if you need to adjust the minimum detectable face size, modify this value. The valid range is 2-32.
    int detectFaceScaleVal = 16;
    //Maximum number of faces to detect
    int detectFaceMaxNum = 5;

detectFaceScaleVal is the ratio of the face to the image; simply put, the larger the value, the smaller the face that can be detected. detectFaceMaxNum is the maximum number of faces to detect; the more faces detected, the more memory the program needs.

The next parameter, combinedMask, defines the engine's capabilities. It is recommended to keep everything enabled by default; if performance matters, enable only the functions you need.

//The combination of detection functions that need to be initialized in engine initialization
int combinedMask = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION | FaceEngineMask.ASF_AGE | FaceEngineMask.ASF_GENDER | FaceEngineMask.ASF_FACE3DANGLE;

Initialize the engine by calling ASFFunctions.ASFInitEngine

    retCode = ASFFunctions.ASFInitEngine(detectMode, imageDetectFaceOrientPriority,
        detectFaceScaleVal, detectFaceMaxNum, combinedMask, ref pImageEngine);

When the return value retCode is 0, the engine has been initialized successfully.

The other engines are initialized in the same way, including the video-mode engine used for face tracking, the FR engine used for RGB liveness, and the FR engine used for IR liveness. Their parameters differ slightly, and in practice we can fine-tune them as needed.

Similar operations can be seen throughout this file. Because the ArcSoft Demo has already encapsulated the SDK operations in detail, the code in FaceForm.cs is mostly code that interacts with the UI controls, and there is little value in analyzing it line by line. Next we will analyze some of the lower-level code, namely the implementations hidden inside the FaceUtil class.

Detecting Face Information

There are two ways to detect face information. One is to detect from photos and the other is to detect from videos.

    public static ASF_MultiFaceInfo DetectFace(IntPtr pEngine, Image image)
    {
        lock (locks)
        {
            ASF_MultiFaceInfo multiFaceInfo = new ASF_MultiFaceInfo();
            if (image != null)
            {
            /*If the photo size is too large, zoom and align it*/
                if (image.Width > 1536 || image.Height > 1536)
                {
                    image = ImageUtil.ScaleImage(image, 1536, 1536);
                }
                else
                {
                /*If the photo size is normal, align it directly*/
                    image = ImageUtil.ScaleImage(image, image.Width, image.Height);
                }
                if(image == null)
                {
                    return multiFaceInfo;
                }
                /*Converting to SDK-specific format requires manual memory release*/
                ImageInfo imageInfo = ImageUtil.ReadBMP(image);
                if(imageInfo == null)
                {
                    return multiFaceInfo;
                }
                /*Call Engine*/
                multiFaceInfo = DetectFace(pEngine, imageInfo);
                /*Release the memory occupied by the image*/
                MemoryUtil.Free(imageInfo.imgData);
                return multiFaceInfo;
            }
            else
            {
                return multiFaceInfo;
            }
        }
    }

Note the two important methods ScaleImage and ReadBMP in the code above. ScaleImage processes the picture into the format recommended by the ArcSoft face engine, which requires the image width to be a multiple of 4.
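As a rough illustration of that alignment requirement (this is not the Demo's ImageUtil.ScaleImage, just a minimal sketch using System.Drawing), the width can be rounded down to a multiple of 4 like this:

    //Sketch only: crop the width down to a multiple of 4 so the pixel rows are aligned.
    public static Bitmap AlignWidthToFour(Image source)
    {
        int alignedWidth = source.Width - (source.Width % 4);
        Bitmap aligned = new Bitmap(alignedWidth, source.Height);
        using (Graphics g = Graphics.FromImage(aligned))
        {
            g.DrawImage(source, new Rectangle(0, 0, alignedWidth, source.Height));
        }
        return aligned;
    }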

    public static ImageInfo ReadBMP(Image image)
    {
        ImageInfo imageInfo = new ImageInfo();
        Image<Bgr, byte> my_Image = null;
        try
        {
            //Convert the image to a 24-bit BGR image (Emgu CV)
            my_Image = new Image<Bgr, byte>(new Bitmap(image));
            imageInfo.format = ASF_ImagePixelFormat.ASVL_PAF_RGB24_B8G8R8;
            imageInfo.width = my_Image.Width;
            imageInfo.height = my_Image.Height;
            imageInfo.imgData = MemoryUtil.Malloc(my_Image.Bytes.Length);
            MemoryUtil.Copy(my_Image.Bytes, 0, imageInfo.imgData, my_Image.Bytes.Length);
            return imageInfo;
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        finally
        {
            if (my_Image != null)
            {
                my_Image.Dispose();
            }
        }
        return null;
    }

Note that this method calls MemoryUtil.Malloc to allocate unmanaged memory; the caller must later call MemoryUtil.Free() to release it.

The result is returned as an ASF_MultiFaceInfo structure, in which faceRects is the set of detected face boxes and faceNum is the number of faces. The position of the first detected face can be obtained with the following code.

MRECT rect = MemoryUtil.PtrToStructure<MRECT>(multiFaceInfo.faceRects);
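When more than one face is returned, faceRects points to an array of faceNum MRECT structures in unmanaged memory, so the remaining boxes can be read with the same pointer arithmetic. A sketch (the MRECT field names follow the Demo's SDKModels and should be checked there):

    //Sketch: read every face box from the unmanaged array behind faceRects.
    int rectSize = MemoryUtil.SizeOf<MRECT>();
    for (int i = 0; i < multiFaceInfo.faceNum; i++)
    {
        MRECT r = MemoryUtil.PtrToStructure<MRECT>(multiFaceInfo.faceRects + i * rectSize);
        Console.WriteLine(string.Format("Face {0}: left={1}, top={2}, right={3}, bottom={4}",
            i, r.left, r.top, r.right, r.bottom));
    }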

Gender and Age Detection

The FaceUtil class also provides methods for age detection and gender detection: AgeEstimation and GenderEstimation. Their basic pattern is to allocate memory first, then call the corresponding native method, and finally release the memory.

public static ASF_AgeInfo AgeEstimation(IntPtr pEngine, ImageInfo imageInfo, ASF_MultiFaceInfo multiFaceInfo, out int retCode)
{
    retCode = -1;
    //Return an empty result if no face was detected (check before allocating unmanaged memory)
    if (multiFaceInfo.faceNum == 0)
    {
        return new ASF_AgeInfo();
    }
    IntPtr pMultiFaceInfo = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_MultiFaceInfo>());
    MemoryUtil.StructureToPtr(multiFaceInfo, pMultiFaceInfo);
    //Face information processing
    retCode = ASFFunctions.ASFProcess(pEngine, imageInfo.width, imageInfo.height, imageInfo.format, imageInfo.imgData, pMultiFaceInfo, FaceEngineMask.ASF_AGE);
    if (retCode == 0)
    {
        //Get the age information
        IntPtr pAgeInfo = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_AgeInfo>());
        retCode = ASFFunctions.ASFGetAge(pEngine, pAgeInfo);
        Console.WriteLine("Get Age Result:" + retCode);
        ASF_AgeInfo ageInfo = MemoryUtil.PtrToStructure<ASF_AgeInfo>(pAgeInfo);
        //Release the unmanaged memory
        MemoryUtil.Free(pMultiFaceInfo);
        MemoryUtil.Free(pAgeInfo);
        return ageInfo;
    }
    else
    {
        //Don't forget to release pMultiFaceInfo on the failure path as well
        MemoryUtil.Free(pMultiFaceInfo);
        return new ASF_AgeInfo();
    }
}

Note that in order to use gender and age detection, the corresponding functions must be enabled when the SDK is initialized; in other words, the combinedMask value must include FaceEngineMask.ASF_AGE | FaceEngineMask.ASF_GENDER.
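A short usage sketch of the helper (the ASF_AgeInfo field names are taken from the Demo's SDKModels and may need to be double-checked there); GenderEstimation is used in exactly the same way with ASF_GenderInfo:

    int ageRetCode;
    ASF_AgeInfo ageInfo = FaceUtil.AgeEstimation(pImageEngine, imageInfo, multiFaceInfo, out ageRetCode);
    if (ageRetCode == 0 && ageInfo.num > 0)
    {
        //ageArray points to one int per detected face; read the first one
        int firstFaceAge = MemoryUtil.PtrToStructure<int>(ageInfo.ageArray);
        Console.WriteLine("Age of the first face: " + firstFaceAge);
    }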

Getting feature information from photos

After obtaining the face box in the previous step, we can call the face recognition engine to get the face feature information: pass the image information into the engine and it returns the face feature model.

IntPtr pFaceModel = ExtractFeature(pEngine, imageInfo, multiFaceInfo, out singleFaceInfo);

Let's take a look at the ExtractFeature method. The Demo is a bit more complex here: there are several overloads with the same name. Let's analyze them one by one.

First, find the IntPtr ExtractFeature(IntPtr pEngine, Image image, out ASF_SingleFaceInfo singleFaceInfo) overload.

Because the first step of face recognition is to locate the face box, this overload pre-processes and validates the incoming image and then calls the face detection method.

// ... other code mainly validates the incoming picture and converts its size,
// returning an empty feature directly if the picture is null or invalid.
ASF_MultiFaceInfo multiFaceInfo = DetectFace(pEngine, imageInfo);
singleFaceInfo = new ASF_SingleFaceInfo();
IntPtr pFaceModel = ExtractFeature(pEngine, imageInfo, multiFaceInfo, out singleFaceInfo);
return pFaceModel;

Following the call order, let's look at the IntPtr ExtractFeature(IntPtr pEngine, ImageInfo imageInfo, ASF_MultiFaceInfo multiFaceInfo, out ASF_SingleFaceInfo singleFaceInfo) overload.

    public static IntPtr ExtractFeature(IntPtr pEngine, ImageInfo imageInfo, ASF_MultiFaceInfo multiFaceInfo, out ASF_SingleFaceInfo singleFaceInfo)
    {
        /*Define a single face information structure to be returned*/
        singleFaceInfo = new ASF_SingleFaceInfo();
        /*If there is no face box, return the empty feature directly*/
        if (multiFaceInfo.faceRects == null)
        {
            ASF_FaceFeature emptyFeature = new ASF_FaceFeature();
            IntPtr pEmptyFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
            MemoryUtil.StructureToPtr(emptyFeature, pEmptyFeature);
            return pEmptyFeature;
        }
        /*Assigning Face Box and Face Angle in FaceDetect to out Object*/            
        singleFaceInfo.faceRect = MemoryUtil.PtrToStructure<MRECT>(multiFaceInfo.faceRects);
        singleFaceInfo.faceOrient = MemoryUtil.PtrToStructure<int>(multiFaceInfo.faceOrients);
        /*Converting a single face object into an unmanaged structure*/            
        IntPtr pSingleFaceInfo = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_SingleFaceInfo>());
        MemoryUtil.StructureToPtr(singleFaceInfo, pSingleFaceInfo);
        IntPtr pFaceFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
        /*Calling Face Recognition Interface to Extract Face Features*/
        int retCode = ASFFunctions.ASFFaceFeatureExtract(pEngine,
            imageInfo.width, imageInfo.height, imageInfo.format,
            imageInfo.imgData,
            pSingleFaceInfo, pFaceFeature);
        Console.WriteLine("FR Extract Feature result:" + retCode);
        if (retCode != 0)
        {
            /*Exception handling, note: Since unmanaged objects are used, memory needs to be freed*/
            MemoryUtil.Free(pSingleFaceInfo);
            MemoryUtil.Free(pFaceFeature);
            ASF_FaceFeature emptyFeature = new ASF_FaceFeature();
            IntPtr pEmptyFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
            MemoryUtil.StructureToPtr(emptyFeature, pEmptyFeature);
            return pEmptyFeature;
        }
        //Process the return value; this involves a series of interop copies between managed and unmanaged memory
        ASF_FaceFeature faceFeature = MemoryUtil.PtrToStructure<ASF_FaceFeature>(pFaceFeature);
        byte[] feature = new byte[faceFeature.featureSize];
        MemoryUtil.Copy(faceFeature.feature, feature, 0, faceFeature.featureSize);
        ASF_FaceFeature localFeature = new ASF_FaceFeature();
        localFeature.feature = MemoryUtil.Malloc(feature.Length);
        MemoryUtil.Copy(feature, 0, localFeature.feature, feature.Length);
        localFeature.featureSize = feature.Length;
        IntPtr pLocalFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
        MemoryUtil.StructureToPtr(localFeature, pLocalFeature);
        //Finally, don't forget to free memory
        MemoryUtil.Free(pSingleFaceInfo);
        MemoryUtil.Free(pFaceFeature);
        /*Return extracted facial feature data*/
        return pLocalFeature;
    }

Face Retrieval

For face retrieval, a local database of face material must be built first. The face feature extracted in the previous step is a block of binary data; in practice we can store the feature in a database or in local files. For the convenience of demonstration, the Demo simply keeps them in the imagesFeatureList variable.
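For example, the bytes behind the feature pointer returned by ExtractFeature can be copied into a managed array and written to disk or a database (a sketch; the ASF_FaceFeature layout matches what we saw in ExtractFeature above, and the file name is only illustrative):

    //Copy the unmanaged feature data into a byte[] so it can be persisted.
    ASF_FaceFeature faceFeature = MemoryUtil.PtrToStructure<ASF_FaceFeature>(pFaceModel);
    byte[] featureBlob = new byte[faceFeature.featureSize];
    MemoryUtil.Copy(faceFeature.feature, featureBlob, 0, faceFeature.featureSize);
    System.IO.File.WriteAllBytes("person_0001.feature", featureBlob);   //or save the blob to a database column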

When performing retrieval, the face feature to be searched is obtained first, and then the retrieval is completed by calling the ASFFunctions.ASFFaceFeatureCompare method for each registered feature.

        for (int i = 0; i < imagesFeatureList.Count; i++)
        {
            IntPtr feature = imagesFeatureList[i];
            float similarity = 0f;
            int ret = ASFFunctions.ASFFaceFeatureCompare(pImageEngine, image1Feature, feature, ref similarity);
            //Handle abnormal values (scientific notation such as 1E-05 is treated as 0)
            if (similarity.ToString().IndexOf("E") > -1)
            {
                similarity = 0f;
            }
            AppendText(string.Format("Comparison with face #{0}: {1}\r\n", i, similarity));
            imageList.Items[i].Text = string.Format("#{0} ({1})", i, similarity);
            if (similarity > compareSimilarity)
            {
                compareSimilarity = similarity;
                compareNum = i;
            }
        }

The ASFFunctions.ASFFaceFeatureCompare method actually calls the corresponding SDK function, whose output is the similarity value.
The Demo compares the captured face with every face in the face database to find the closest feature.
In practical applications, as soon as we find a feature that meets our confidence threshold, we can exit the loop directly.
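A minimal sketch of that early exit, reusing the threshold field defined in the parameter section:

    if (similarity >= threshold)   //threshold was defined earlier as 0.8f
    {
        compareSimilarity = similarity;
        compareNum = i;
        break;                     //good enough, stop scanning the rest of the database
    }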

Tip: in practical applications, if the face database is large, you can open multiple FR instances and search in parallel; you can also use the gender and age data from face detection to narrow the query scope.

Face Detection from Video

If face detection from photos is the foundation of face recognition, then face recognition from video is the most practical application. Real-time face recognition systems are built on video detection, and liveness detection is likewise based on face detection in video mode.
Simply put, detecting faces from video means capturing frames that contain faces from the video stream and running the recognition process on them. This process is described in detail in FaceForm's videoSource_Paint method.

  1. Grab a frame from the camera (RGB camera).
  2. Call the video-mode face detection engine to detect faces.
  3. Draw a face box at the position of the returned face, and label it with the recognition result of the previous frame.
  4. Call the liveness detection function with the returned face information.
  5. When the face is judged to be live, use the face recognition engine to extract its features.
  6. Match the face features against the face database.
  7. Record the results.
  8. Wait for the next frame.

The first thing to note is pVideoEngine

ASF_MultiFaceInfo multiFaceInfo = FaceUtil.DetectFace(pVideoEngine, bitmap);
This pVideoEngine is a face detection engine that works in video mode; it is created in the InitEngines() method:

        uint detectModeVideo = DetectionMode.ASF_DETECT_MODE_VIDEO;
        int combinedMaskVideo = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION;
        retCode = ASFFunctions.ASFInitEngine(detectModeVideo, videoDetectFaceOrientPriority,
            detectFaceScaleVal, detectFaceMaxNum, combinedMaskVideo, ref pVideoEngine);

The engine called here is pVideoEngine, which is different from the image-based pImageEngine: pVideoEngine is initialized in video mode. For video detection scenarios video mode is recommended; for processing single images, image mode is recommended.
A video source (camera) produces 25-30 frames of data per second, while, because of differences in algorithm implementation, image-mode detection can only run about 20 times per second; so using image mode to detect faces in video is not feasible due to insufficient computing power, whereas video mode can run about 100 times per second. In addition, in version 2.2 video mode also outputs a TrackID parameter, which makes it easier to determine that consecutive detections belong to the same person. For single-image detection, image mode is generally used: its detection is more thorough and it supports multiple faces and large images better, and since only one image is processed at a time, performance is not an issue; in general, 5-10 images per second is enough to meet product requirements.

One point to note here: the Demo also stresses that only one frame may be processed at a time; processing multiple frames simultaneously must be avoided, otherwise the displayed page will stutter. In addition, extracting feature values and comparing them is time-consuming, so an extra thread is needed to keep the main UI thread from freezing.
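A minimal sketch of that pattern (the field and method names here are illustrative, not the Demo's exact members; requires System.Threading and System.Drawing): a flag skips frames that arrive while the previous one is still being processed, and the heavy work runs on a worker thread with an enlarged stack.

    private volatile bool isProcessingFrame = false;

    private void ProcessFrame(Bitmap frame)   //called for each captured frame
    {
        if (isProcessingFrame) return;        //skip this frame, the previous one is still being processed
        isProcessingFrame = true;
        new Thread(new ThreadStart(delegate
        {
            try
            {
                //detection, feature extraction and comparison would go here
            }
            finally
            {
                isProcessingFrame = false;
            }
        }), 1024 * 512).Start();              //enlarged stack, see the WPF/ASP.NET note below
    }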

Liveness Detection

Types of Liveness Detection

Two kinds of liveness detection are in common use today: interactive and non-interactive. When you log in to Alipay and are asked to open your mouth or shake your head, that is interactive; if no extra action is required, it is non-interactive. The ArcSoft SDK provides two non-interactive algorithms for different cameras: one based on an RGB camera and one based on an infrared (IR) camera.

RGB liveness

Only a single RGB camera is needed, so the hardware cost is low; recognition is silent and requires no action from the user, and an ordinary camera is enough. It is user-friendly and suits a wide range of application scenarios.

IR liveness

Using the principles of infrared imaging (screens cannot be imaged, different materials reflect infrared differently, and so on) together with deep-learning algorithms, it achieves highly robust, silent liveness judgment, effectively defending against photo, video, screen and mask attacks, and meets the liveness detection needs of binocular face recognition terminal products.

A simple rule of thumb: if your camera has only a color lens and no infrared lens, use RGB liveness; if you have a binocular camera where one lens is infrared, you can use IR liveness. In terms of reliability, IR liveness is more trustworthy, but it requires special hardware.

In versions before 2.1, the ArcSoft SDK only provided RGB liveness detection. If you need IR liveness detection, you must use the 2.2 SDK.

RGB Liveness Interface Analysis

ArcSoft's liveness detection is built into the FR engine. To use it, the liveness function must be enabled first. In the Demo's InitEngines() method, we can see the initialization of this FR engine.

Initialization

Depending on the camera, different liveness engines are enabled. The following code enables the FR engine in RGB mode.

        //Dedicated FR engine for RGB video
        detectFaceMaxNum = 1;
        combinedMask = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION | FaceEngineMask.ASF_LIVENESS;
        retCode = ASFFunctions.ASFInitEngine(detectMode, imageDetectFaceOrientPriority, detectFaceScaleVal, detectFaceMaxNum, combinedMask, ref pVideoRGBImageEngine);

Here, FaceEngineMask.ASF_LIVENESS is for normal RGB liveness; for a binocular infrared camera, use FaceEngineMask.ASF_IR_LIVENESS instead.

Performing Liveness Detection

The best time to perform liveness detection is right after the face box has been obtained and before the face features are analyzed. At that point we only need to call FaceUtil's LivenessInfo_RGB method; the isLive value in the returned livenessInfo tells us whether the face is live or not.
LivenessInfo_RGB internally calls the SDK's ASFProcess method:

        retCode = ASFFunctions.ASFProcess(pEngine, imageInfo.width, imageInfo.height, imageInfo.format, imageInfo.imgData, pMultiFaceInfo, FaceEngineMask.ASF_LIVENESS);
        if (retCode == 0)
        {
            //Get the liveness detection result
            IntPtr pLivenessInfo = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_LivenessInfo>());
            retCode = ASFFunctions.ASFGetLivenessScore(pEngine, pLivenessInfo);
            Console.WriteLine("Get Liveness Result:" + retCode);
            ASF_LivenessInfo livenessInfo = MemoryUtil.PtrToStructure<ASF_LivenessInfo>(pLivenessInfo);
            //Release memory
            MemoryUtil.Free(pMultiFaceInfo);
            MemoryUtil.Free(pLivenessInfo);
            return livenessInfo;
        }

The isLive value in the returned livenessInfo indicates the liveness result: 1 means a live face, while -1 means a spoof. When the program judges a face to be a spoof, it skips feature extraction and matching.
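Reading the result could look like the sketch below (the ASF_LivenessInfo field layout is assumed to mirror the other SDK model classes, with isLive pointing to one int per face; check SDKModels if it differs):

    if (livenessInfo.num > 0)
    {
        int isLive = MemoryUtil.PtrToStructure<int>(livenessInfo.isLive);
        if (isLive != 1)
        {
            return;   //spoof (or unknown): skip feature extraction and matching
        }
    }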

IR Liveness Interface Analysis

To be added...

Sharing of Problems and Solutions

Managing Unmanaged Memory and Memory Leaks

In C# programs we normally deal with managed memory and use new to create objects. The SDK provided by ArcSoft, however, is native code written in C: using it requires allocating and working with unmanaged memory, and its parameters are C-style structures. For convenience, the Demo provides the MemoryUtil class, which wraps the corresponding methods of the Marshal class and makes calling the C functions easy.

When writing your own program based on the Demo code, note that some FaceUtil methods call Malloc to allocate memory but do not release it themselves; the memory is released in other methods instead. Keep one principle of unmanaged memory management in mind: every Marshal.AllocHGlobal call must be paired with a manual Marshal.FreeHGlobal(ptr) call. Even calling GC.Collect() cannot free this memory, and forgetting to release it leads to memory leaks.
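A simple discipline that helps is wrapping the allocation in try/finally, as in this sketch:

    IntPtr pFaceFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
    try
    {
        //... call into the SDK using pFaceFeature ...
    }
    finally
    {
        MemoryUtil.Free(pFaceFeature);   //GC.Collect() will never release memory obtained from Malloc/AllocHGlobal
    }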

The Marshal class is the most important class for interop in .NET. It provides a collection of methods for allocating unmanaged memory, copying unmanaged memory blocks, converting between managed and unmanaged types, and other miscellaneous methods used when interacting with unmanaged code. For details, refer to the MSDN documentation: https://docs.microsoft.com/zh...

DLL not found

Different versions of Visual Studio have certain requirements for where the DLLs are placed, and the target CPU type of the program also affects which DLLs are used. Generally speaking, for a 32-bit program the DLLs go into the x86 folder, and for a 64-bit program they go into the x64 folder.

Activation failure

SDK version 2.2 activates itself automatically on first use, so please make sure the machine is connected to the Internet the first time you run it.
If error 90118 (device mismatch) appears when the program starts, the hardware information has usually changed. In that case, just delete the ArcFace32.dat or ArcFace64.dat file in the SDK directory; when the SDK does not find the file, it will automatically activate again over the network.

Can the code in the Demo be used in WPF or ASP.NET?

Certainly. We can adapt the official Demo into an ASP.NET application or a WPF application according to our own business logic.
The Demo already encapsulates most of the functionality; once you understand the business logic, you can use the methods in FaceUtil directly. However, in WPF or ASP.NET you may run into insufficient stack space, because .NET may default to a thread stack size of 256 KB or less in those hosts, while the SDK needs more than 512 KB. Just specify a larger stack size when creating new threads, as in the following method.

new Thread(new ThreadStart(delegate {
        ASF_MultiFaceInfo multiFaceInfo = FaceUtil.DetectFace(pEngine, imageInfo);
    }), 1024 * 512).Start();

More questions and support

The ArcSoft Open Platform forum provides an official channel for information exchange; it can be reached at https://ai.arcsoft.com.cn/bbs/index.php. Technicians are on hand to answer your questions, and if you have a nice Demo to share with other developers, you can also upload your work to the forum.

Posted by tomc_1 on Mon, 19 Aug 2019 02:21:07 -0700