Python WeChat Official Account Development - Face Detection

Keywords: encoding, JSON, XML, Python

Link to the original text: https://www.cnblogs.com/injet/p/9191993.html

In the last article, we connected our program to the WeChat Official Account and echoed back the text and picture messages users sent. Today, we analyze users' pictures through Tencent's AI platform and return the results to them.

Demo screenshot



I. Accessing the Tencent AI Platform

Let's first look at the official description of the face detection and analysis interface:

Detects the position and facial attributes of all faces in a given image. Position includes (x, y, w, h); facial attributes include gender, age, expression, beauty score, glasses and pose (pitch, roll, yaw).

Request parameters include the following:

  • app_id: application ID, which we get after registering on the AI platform
  • time_stamp: timestamp
  • nonce_str: random string
  • sign: signature information, which we must compute ourselves
  • image: the image to be detected (upper limit 1 MB)
  • mode: detection mode
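
From the fields the code below consumes, a successful response has roughly this shape (a sketch reconstructed from usage, not the full schema; all values are illustrative):

res = {
    'ret': 0,  # 0 means the request succeeded
    'data': {
        'image_width': 640,
        'image_height': 480,
        'face_list': [
            {'x': 120, 'y': 80, 'width': 200, 'height': 200,  # face box
             'gender': 85, 'age': 25, 'expression': 55,       # 0-100 scales
             'beauty': 70, 'glass': 1},                       # glass: 1 = wearing glasses
        ],
    },
}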

1. Interface Authentication, Constructing Request Parameters
The official documentation gives us the method for calculating the interface authentication signature:

1. Sort the <key, value> request parameters by key in ascending dictionary order to obtain the ordered parameter list N.
2. Join the pairs in N into a string T using the URL key-value format (e.g. key1=value1&key2=value2). The value part must be URL-encoded, and the encoding must use uppercase hex letters (e.g. %E8, not %e8).
3. Using app_key as the key name, splice the application key onto the end of string T to obtain string S (e.g. key1=value1&key2=value2&app_key=key).
4. Run MD5 over string S and convert every character of the digest to uppercase; the result is the interface request signature.
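
A minimal sketch of these four steps with dummy values (the app_id and app_key here are placeholders, not real credentials); appending app_key to the sorted pair list before URL-encoding is equivalent to splicing it onto the end of T:

from urllib.parse import urlencode
import hashlib

params = {'app_id': '123', 'time_stamp': '1566350000', 'nonce_str': 'abcde', 'mode': '0'}
pairs = sorted(params.items())                      # step 1: sort by key
pairs.append(('app_key', 'PLACEHOLDER_KEY'))        # step 3: app_key goes last
s = urlencode(pairs)                                # step 2: key=value&... with uppercase %XX encoding
sign = hashlib.md5(s.encode()).hexdigest().upper()  # step 4: uppercase MD5 of string S
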
2. Request interface address
We use the requests library to send the request; the image information is returned in JSON format. Install it with pip install requests.

3. Processing the returned information
We process the returned information, draw it on the image, and then save the processed picture. Here we use the opencv and pillow libraries; install them with pip install pillow and pip install opencv-python.
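
The only subtle part is converting between the two libraries: OpenCV stores pixels as BGR numpy arrays, while Pillow works with RGB images (and, unlike cv2.putText, can draw Chinese text with a TTF font). A minimal round trip, where test.jpg is a placeholder path:

import cv2
import numpy as np
from PIL import Image

frame = cv2.imread('test.jpg')                                     # BGR ndarray
pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # to RGB PIL image
# ... draw text and boxes on pil_img with ImageDraw here ...
frame = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)         # back to BGR for imwrite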

Now for the code: we create a new face_id.py file to interface with the AI platform and return the detected image data.

import time
import random
import base64
import hashlib
import requests
from urllib.parse import urlencode
import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont
import os

# 1. Calculate interface authentication and construct request parameters

def random_str():
    '''Generate the random string for nonce_str'''
    chars = 'abcdefghijklmnopqrstuvwxyz'
    return ''.join(random.choice(chars) for _ in range(15))


def image(name):
    '''Read an image file and return its base64-encoded content
    (a helper; access_api below encodes via OpenCV instead).'''
    with open(name, 'rb') as f:
        content = f.read()
    return base64.b64encode(content)


def get_params(img):
    '''Build the parameters for the interface request, compute the sign
    authentication value, and return the final parameter dict.'''
    params = {
        'app_id': '1106860829',
        'time_stamp': str(int(time.time())),
        'nonce_str': random_str(),
        'image': img,
        'mode': '0'
    }

    sort_dict = sorted(params.items(), key=lambda item: item[0], reverse=False)  # sort
    sort_dict.append(('app_key', 'P8Gt8nxi6k8vLKbS'))  # Add app_key
    rawtext = urlencode(sort_dict).encode()  # URL encoding
    md5 = hashlib.md5()
    md5.update(rawtext)
    md5text = md5.hexdigest().upper()  # Compute the sign for interface authentication
    params['sign'] = md5text  # Add to the request parameter list
    return params

# 2. Request the interface URL


def access_api(img):
    print(img)
    frame = cv2.imread(img)
    nparry_encode = cv2.imencode('.jpg', frame)[1]
    data_encode = np.array(nparry_encode)
    img_encode = base64.b64encode(data_encode)  # Pictures converted to base64 encoding format
    url = 'https://api.ai.qq.com/fcgi-bin/face/face_detectface'
    res = requests.post(url, get_params(img_encode)).json()  # Request the URL for json information
    # Display information on pictures
    if res['ret'] == 0:  # 0 Represents a successful request.
        pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # Converting opencv format to PIL format for easy writing of Chinese characters
        draw = ImageDraw.Draw(pil_img)
        for obj in res['data']['face_list']:
            img_width = res['data']['image_width']  # Image width
            img_height = res['data']['image_height']  # Image Height
            # print(obj)
            x = obj['x']  # x-coordinates of upper left corner of face frame
            y = obj['y']  # y coordinates in the upper left corner of the face frame
            w = obj['width']  # Face Frame Width
            h = obj['height']  # Face Frame Height
            # Customize the text content displayed based on the returned value
            if obj['glass'] == 1:  # Glasses
                glass = 'Yes'
            else:
                glass = 'No'
            if obj['gender'] >= 70:  # The gender value runs from 0 (female) to 100 (male)
                gender = 'male'
            elif 50 <= obj['gender'] < 70:
                gender = 'girly man'
            elif obj['gender'] < 30:
                gender = 'female'
            else:  # 30 <= gender < 50
                gender = 'tomboy'
            if 90 < obj['expression'] <= 100:  # The expression is 0-100, indicating the degree of laughter.
                expression = 'A single smile would overthrow a city'
            elif 80 < obj['expression'] <= 90:
                expression = 'Be wild with joy'
            elif 70 < obj['expression'] <= 80:
                expression = 'Be jubilant'
            elif 60 < obj['expression'] <= 70:
                expression = 'Look cheerful'
            elif 50 < obj['expression'] <= 60:
                expression = 'Look very happy'
            elif 40 < obj['expression'] <= 50:
                expression = 'Jubilant'
            elif 30 < obj['expression'] <= 40:
                expression = 'A smile has driven all the hard lines in his face and brightened his countenance'
            elif 20 < obj['expression'] <= 30:
                expression = "A faint smile on one's face"
            elif 10 < obj['expression'] <= 20:
                expression = 'Half a voice half a joy'
            elif 0 <= obj['expression'] <= 10:
                expression = 'Feel dejected unwittingly'
            delt = h // 5  # Vertical spacing between the lines of text
            # Write pictures
            if len(res['data']['face_list']) > 1:  # When multiple faces are detected, the information is written into the face frame.
                font = ImageFont.truetype('yahei.ttf', w // 8, encoding='utf-8') # Download font files in advance
                draw.text((x + 10, y + 10), 'Gender :' + gender, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 1), 'Age :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 2), 'Expression :' + expression, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 3), 'charm :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 4), 'glasses :' + glass, (76, 176, 80), font=font)
            elif img_width - x - w < 170:  # Avoid image too narrow, resulting in incomplete text display
                font = ImageFont.truetype('yahei.ttf', w // 8, encoding='utf-8')
                draw.text((x + 10, y + 10), 'Gender :' + gender, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 1), 'Age :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 2), 'Expression :' + expression, (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 3), 'charm :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + 10, y + 10 + delt * 4), 'glasses :' + glass, (76, 176, 80), font=font)
            else:
                font = ImageFont.truetype('yahei.ttf', 20, encoding='utf-8')
                draw.text((x + w + 10, y + 10), 'Gender :' + gender, (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 1), 'Age :' + str(obj['age']), (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 2), 'Expression :' + expression, (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 3), 'charm :' + str(obj['beauty']), (76, 176, 80), font=font)
                draw.text((x + w + 10, y + 10 + delt * 4), 'glasses :' + glass, (76, 176, 80), font=font)

            draw.rectangle((x, y, x + w, y + h), outline="#4CB050")  # Draw the face box

        # After all faces are drawn, convert back and save once
        cv2img = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)  # Convert PIL format back to OpenCV
        cv2.imwrite('faces/{}'.format(os.path.basename(img)), cv2img)  # Save the picture to the faces folder
        return 'Successful detection'
    else:
        return 'Detection failure'

At this point, the face detection interface access and image processing are complete. After receiving a picture from the user, we call this function and return the processed image.

II. Returning the Picture to the User
When a user's picture arrives, the following steps are needed:

1. Save the picture
After receiving the user's picture, we must save it first; only then can we call the face analysis interface and pass the image data along. For this we write an img_download function to download the picture; see the code below for details.

2. Call the face analysis interface
Once the picture is downloaded, we call the interface function in the face_id.py file to get the processed picture.

3. Upload the picture
The detection result is a new picture. To send a picture to the user we need a Media_ID, and to get a Media_ID we must first upload the picture as temporary material, so we need an img_upload function. Uploading requires an access_token, which we fetch with another function. To obtain the access_token, our server's IP address must be in the whitelist, otherwise the request fails: log in to "WeChat Official Platform - Development - Basic Configuration" beforehand and add the server's IP address to the IP whitelist. You can check your machine's IP address at http://ip.qqq.com/

On to the code: we create a new utils.py to download and upload pictures.

import requests
import json
import threading
import time
import os
token = ''
app_id = 'wxfc6adcdd7593a712'
secret = '429d85da0244792be19e0deb29615128'


def img_download(url, name):
    r = requests.get(url)
    filename = 'images/{}-{}.jpg'.format(name, time.strftime("%Y_%m_%d_%H_%M_%S", time.localtime()))
    with open(filename, 'wb') as fd:
        fd.write(r.content)
    if os.path.getsize(filename) >= 1048576:
        return 'large'  # Exceeds the interface's 1 MB limit
    return os.path.basename(filename)


def get_access_token(appid, secret):
    '''Obtain the access_token and refresh it every 100 minutes
    (the token itself is valid for 7200 seconds).'''
    url = 'https://api.weixin.qq.com/cgi-bin/token?grant_type=client_credential&appid={}&secret={}'.format(appid, secret)
    r = requests.get(url)
    parse_json = json.loads(r.text)
    global token
    token = parse_json['access_token']
    global timer
    timer = threading.Timer(6000, get_access_token, args=(appid, secret))  # Schedule the next refresh
    timer.start()


def img_upload(mediaType, name):
    url = "https://api.weixin.qq.com/cgi-bin/media/upload?access_token=%s&type=%s" % (token, mediaType)
    with open(name, 'rb') as f:
        r = requests.post(url, files={'media': f})
    parse_json = json.loads(r.text)
    return parse_json['media_id']

get_access_token(app_id, secret)

4. Reply to the user
We simply modify the logic that runs after a picture is received: run face detection, upload the result to get a Media_ID, and send the picture back to the user. Look directly at the code for connect.py.

import falcon
from falcon import uri
from wechatpy.utils import check_signature
from wechatpy.exceptions import InvalidSignatureException
from wechatpy import parse_message
from wechatpy.replies import TextReply, ImageReply

from utils import img_download, img_upload
from face_id import access_api

class Connect(object):

    def on_get(self, req, resp):
        # Parse the signature verification query string WeChat sends
        b = uri.parse_query_string(req.query_string)

        try:
            check_signature(token='lengxiao', signature=b['signature'], timestamp=b['timestamp'], nonce=b['nonce'])
            resp.body = b['echostr']
        except InvalidSignatureException:
            pass
        resp.status = falcon.HTTP_200

    def on_post(self, req, resp):
        xml = req.stream.read()
        msg = parse_message(xml)
        if msg.type == 'text':
            print('hello')
            reply = TextReply(content=msg.content, message=msg)
            xml = reply.render()
            resp.body = (xml)
            resp.status = falcon.HTTP_200
        elif msg.type == 'image':
            name = img_download(msg.image, msg.source)  # Download the picture
            fail_text = 'Face detection failed, please upload a clear face picture under 1 MB'
            if name == 'large':  # Over the 1 MB interface limit, skip detection
                reply = TextReply(content=fail_text, message=msg)
            elif access_api('images/' + name) == 'Successful detection':
                media_id = img_upload('image', 'faces/' + name)  # Upload the picture and get media_id
                reply = ImageReply(media_id=media_id, message=msg)
            else:
                reply = TextReply(content=fail_text, message=msg)
            xml = reply.render()
            resp.body = (xml)
            resp.status = falcon.HTTP_200

app = falcon.API()
connect = Connect()
app.add_route('/connect', connect)

At this point our work is done, and the Official Account can do face detection. I originally intended to deploy it on my own account, but a couple of problems stopped me.

Because of WeChat's mechanics, our program must respond within 5 seconds, otherwise the Official Account's service is treated as failed. Image processing, however, is sometimes slow and often takes more than 5 seconds. The correct approach is to return an empty string immediately after receiving the user's request to acknowledge it, spawn a separate thread to process the picture, and, once it is done, push it to the user through the customer service interface; a sketch of that pattern follows. Unfortunately, unverified Official Accounts have no customer service interface, so there is no way around it: anything over 5 seconds produces an error.
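
A sketch of that pattern, reusing img_download, img_upload and access_api from above; the customer service endpoint (api.weixin.qq.com/cgi-bin/message/custom/send) is only available to verified accounts:

import json
import threading

import requests

import utils  # utils.token is kept fresh by get_access_token
from utils import img_download, img_upload
from face_id import access_api


def process_and_push(msg):
    '''Run detection off the request thread, then push the result
    to the user through the customer service interface.'''
    name = img_download(msg.image, msg.source)
    if name != 'large' and access_api('images/' + name) == 'Successful detection':
        media_id = img_upload('image', 'faces/' + name)
        url = 'https://api.weixin.qq.com/cgi-bin/message/custom/send?access_token={}'.format(utils.token)
        payload = {'touser': msg.source, 'msgtype': 'image', 'image': {'media_id': media_id}}
        requests.post(url, data=json.dumps(payload))

# In on_post: acknowledge within 5 seconds, then hand off:
#     resp.body = ''  # an empty reply tells WeChat the message was received
#     threading.Thread(target=process_and_push, args=(msg,)).start()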

Custom menus are impossible. Once custom development is enabled, the menu has to be configured by our program, but unverified Official Accounts have no permission to configure menus through the API and can only set them up in the WeChat admin backend.

So I didn't deploy this program on my own account, but with a verified Official Account you could try developing all kinds of interesting features.

Posted by whizkid on Wed, 21 Aug 2019 00:22:32 -0700