Talking about how JMeter executes Python scripts concurrently

Keywords: Python, PyCharm, programmer, software testing, IT

1. Preface

Hello, I'm Test Jun!

Recently, a reader left me a message saying that he had written a large-file upload API with Django, and that he now wanted to test the interface's stability under concurrent load locally. He asked whether I had a good approach.

This article takes file upload as an example and walks through the complete process of using JMeter to execute Python scripts concurrently.

2. Implementing file upload in Python

Uploading a large file involves the following steps:

  • Get the file information and the number of chunks
  • Slice the file and upload each chunk (API)
  • Merge the chunks (API)
  • Parameterize the file path

2-1 Get the file information and the number of chunks

First, get the size of the file.

Then, use the preset chunk size to compute the total number of chunks. For example, a 5 MB file with a 2 MB chunk size needs ceil(5 / 2) = 3 chunks.

Finally, get the file name and its MD5 value.

import os
import math
import hashlib

class FileApi:

    def __init__(self, chunk_size):
        # Preset size of each chunk, in bytes
        self.chunk_size = chunk_size

    def get_file_md5(self, file_path):
        """Get the MD5 value of the file"""
        with open(file_path, 'rb') as f:
            data = f.read()
            return hashlib.md5(data).hexdigest()

    def get_filename(self, filepath):
        """Get the original name of the file"""
        # File name with suffix
        filename_with_suffix = os.path.basename(filepath)
        # File name without suffix
        filename = filename_with_suffix.split('.')[0]
        # Suffix
        suffix = filename_with_suffix.split('.')[-1]
        return filename_with_suffix, filename, suffix

    def get_chunk_info(self, file_path):
        """Get the chunking information"""
        # Total file size in bytes
        file_total_size = os.path.getsize(file_path)
        print(file_total_size)

        # Total number of chunks, rounded up
        total_chunks_num = math.ceil(file_total_size / self.chunk_size)
        # File name (with suffix)
        filename = self.get_filename(file_path)[0]
        # MD5 value of the file
        file_md5 = self.get_file_md5(file_path)
        return file_total_size, total_chunks_num, filename, file_md5
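
As a quick sanity check, this part of the class can be exercised on its own before JMeter gets involved. A minimal usage sketch, assuming a 2 MB chunk size as in section 2-4 and using one of the file paths from the CSV prepared later as a placeholder:

# Minimal usage sketch (the path is a placeholder)
api = FileApi(chunk_size=2 * 1024 * 1024)
size, chunks, name, md5 = api.get_chunk_info(r'C:\Users\xingag\Desktop\V2.0.pdf')
print(size, chunks, name, md5)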

2-2 Slice the file and upload the chunks

Using the total number of chunks and the chunk size, slice the file and call the chunked-upload API for each chunk.

import requests

# The following methods also belong to the FileApi class
def do_chunk_and_upload(self, file_path):
    """Slice the file and upload each chunk"""
    file_total_size, total_chunks_num, filename, file_md5 = self.get_chunk_info(file_path)

    # Iterate over the chunks
    for index in range(total_chunks_num):
        print('Uploading chunk {}'.format(index + 1))
        if index + 1 == total_chunks_num:
            # The last chunk holds however many bytes remain
            partSize = file_total_size - index * self.chunk_size
        else:
            partSize = self.chunk_size

        # Offset of this chunk within the file
        offset = index * self.chunk_size

        # Chunk ids start from 1
        chunk_id = index + 1

        print('Ready to upload the chunk')
        print("Chunk id:", chunk_id, "File offset:", offset, ", Current chunk size:", partSize)

        # Upload this chunk
        self.__upload(offset, chunk_id, file_path, file_md5, filename, partSize, total_chunks_num)

def __upload(self, offset, chunk_id, file_path, file_md5, filename, partSize, total):
    """Upload a single chunk"""
    url = 'http://**/file/brust/upload'
    params = {'chunk': chunk_id,
              'fileMD5': file_md5,
              'fileName': filename,
              'partSize': partSize,
              'total': total
              }
    # Read the chunk's binary data from the file at the given offset
    with open(file_path, 'rb') as current_file:
        current_file.seek(offset)
        files = {'file': current_file.read(partSize)}
        resp = requests.post(url, params=params, files=files).text
    print(resp)

2-3 Merge the file

Finally, call the merge API, which reassembles the uploaded chunks back into the original large file on the server.

import json

# Also a method of the FileApi class
def merge_file(self, filepath):
    """Call the merge API to reassemble the chunks"""
    url = 'http://**/file/brust/merge'
    file_total_size, total_chunks_num, filename, file_md5 = self.get_chunk_info(filepath)
    payload = json.dumps(
        {
            "fileMD5": file_md5,
            "chunkTotal": total_chunks_num,
            "fileName": filename
        }
    )
    print(payload)
    headers = {
        "Content-Type": "application/json"
    }
    resp = requests.post(url, headers=headers, data=payload).text
    print(resp)

2-4 Parameterize the file path

To support concurrent execution, the upload file path is taken as a command-line parameter, so each run can target a different file.

# fileupload.py
import sys
...
if __name__ == '__main__':
    # File path passed in from the command line
    filepath = sys.argv[1]

    # Size of each chunk (2 MB, in bytes)
    chunk_size = 2 * 1024 * 1024

    fileApi = FileApi(chunk_size)
    # Upload in chunks
    fileApi.do_chunk_and_upload(filepath)

    # Merge the chunks on the server
    fileApi.merge_file(filepath)
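
Because the path comes in through sys.argv, the script can be smoke-tested from a terminal before wiring it into JMeter; one run uploads one file. An illustrative invocation, using one of the paths from the CSV prepared in the next section:

# Upload a single file by hand (illustrative path)
python fileupload.py C:\Users\xingag\Desktop\V2.0.pdf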

3. Concurrent execution with JMeter

Before building the concurrent test in JMeter, we need to write a batch script.

The batch script takes the file path as a command-line argument and forwards it to the Python script.

# cmd.bat

@echo off
rem %1 is the file path passed in by JMeter
set filepath=%1

rem Forward all arguments to the Python upload script
python C:\Users\xingag\Desktop\rpc_demo\fileupload.py %*

Next, create a CSV file locally and write multiple file paths into it, one per line.

# Prepare multiple file paths (csv)
C:\\Users\\xingag\\Desktop\\charles-proxy-4.6.1-win64.msi
C:\\Users\\xingag\\Desktop\\V2.0.pdf
C:\\Users\\xingag\\Desktop\\HBuilder1.zip
C:\\Users\\xingag\\Desktop\\HBuilder2.zip

Now we can build the concurrent test plan in JMeter.

The complete steps are as follows:

  • Create a test plan and add a thread group under it

Here, the number of threads can match the number of file paths above

  • Under the thread group, add a Synchronizing Timer

Set the number of simulated users to group by in the Synchronizing Timer to match the number of threads above

  • Add a CSV Data Set Config

Point it to the CSV file prepared above, set the file encoding to UTF-8 and the variable name to file_path. Finally, set the sharing mode to "Current thread group"

  • Add a Debug Sampler to make debugging easier

  • Add an OS Process Sampler

Select the batch file created above and set the command-line parameter to ${file_path} (see the sketch after this list)

  • Add a View Results Tree listener
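
Putting the pieces together: on each run, the OS Process Sampler substitutes one row of the CSV into ${file_path} and invokes the batch file, which is equivalent to running the following by hand (one CSV row per thread):

# What each JMeter thread effectively executes
cmd.bat C:\\Users\\xingag\\Desktop\\charles-proxy-4.6.1-win64.msi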

4. Finally

Run the JMeter test plan created above, and you can view the results of the concurrent file uploads in the View Results Tree.

Of course, we can increase the concurrency to simulate real usage scenarios; we only need to modify the CSV data source and the JMeter parameters.

If you found this article useful, please like, share, and leave a comment; that is the strongest motivation for me to keep producing high-quality articles!
