Locust stress test

Keywords: Python, testing, performance, Locust


• Official website
• Configuration parameters
• Code files for this article

The preparation section is mostly of personal interest; you can skip straight to the Locust usage section and test against any public interface instead.



Start a database

# Start the container
docker run -itd --name test_db -p 3396:3306 -e MYSQL_ROOT_PASSWORD=123456 mariadb
# Log in
mysql -h192.168.1.105 -P3396 -uroot -p123456
# Create the database
MariaDB [(none)]> create database tdb;

Generate some data

import pymysql
import numpy as np

HOST = ''
PORT = 3396
USER = 'root'
PWD = '123456'
DB = 'tdb'
TABLE = 'employee'

def getEmployee(bit=8):
    chars = [chr(i) for i in range(65, 91)] + [chr(i) for i in range(97, 123)]
    name = ''.join(np.random.choice(chars, bit))
    age = np.random.randint(18, 60)
    sex = np.random.choice(['0', '1'])
    return str((0, name, age, sex))

def dataGen():
    database = pymysql.connect(user=USER, password=PWD, host=HOST, port=PORT, database=DB, charset='utf8')
    cursor = database.cursor()

    # Create table
    sql_ct = "CREATE TABLE IF NOT EXISTS {} ( " \
             "eid INT AUTO_INCREMENT, " \
             "ename  VARCHAR(20) NOT NULL, " \
             "age INT, " \
             "sex VARCHAR(1), " \
             "PRIMARY KEY(eid))".format(TABLE)
    cursor.execute(sql_ct)

    for i in range(100000):
        employee = getEmployee()
        sql_i = "INSERT INTO {} VALUE {}".format(TABLE, employee)
        cursor.execute(sql_i)
        if (i+1)%100==0:
            database.commit()  # Commit in batches of 100
            print('\r[{}/100000]'.format(i+1), end='')

    database.commit()
    cursor.close()
    database.close()


if __name__ == '__main__':
    dataGen()

The constructed data looks as follows.
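As a quick illustration of the row format the script inserts, here is a stdlib-only sketch (it re-creates the same shape as `getEmployee()` without the numpy dependency): a `(eid, name, age, sex)` tuple where `eid=0` lets AUTO_INCREMENT assign the real id.

```python
import random
import string

# A stdlib-only sketch of one generated row: (eid, name, age, sex),
# with eid=0 so AUTO_INCREMENT assigns the real id on insert
name = ''.join(random.choices(string.ascii_letters, k=8))
row = (0, name, random.randint(18, 59), random.choice(['0', '1']))
print(row)
```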

Start a service

Build an HTTP service:

import pymysql
from socketserver import ThreadingMixIn
from http.server import HTTPServer
from http.server import SimpleHTTPRequestHandler
from sys import argv
import logging

HOST = ''
PORT = 3396
USER = 'root'
PWD = '123456'
DB = 'tdb'
TABLE = 'employee'

def dataSelect(eid):
    database = pymysql.connect(user=USER, password=PWD, host=HOST, port=PORT, database=DB, charset='utf8')
    cursor = database.cursor()

    sql_s = "SELECT * FROM {} WHERE eid={}".format(TABLE, eid)
    cursor.execute(sql_s)
    res = cursor.fetchall()  # Fetch the result
    cursor.close()
    database.close()
    if res:
        ee = res[0]
        return {'name': ee[1], 'age': ee[2], 'sex': 'female' if ee[3]=='0' else 'male'}
    return 'no employee with eid={}'.format(eid)

class ThreadingServer(ThreadingMixIn, HTTPServer):
    pass

class RequestHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain;charset=utf-8')
        self.end_headers()
        try:
            eid = int(self.path[1:])
        except ValueError:
            eid = -1
        response = dataSelect(eid)
        logging.info('request of {} by eid={}'.format(response, eid))
        self.wfile.write(str(response).encode('utf-8'))

def run(server_class=ThreadingServer, handler_class=RequestHandler, port=8888):
    server_address = ('', port)
    httpd = server_class(server_address, handler_class)
    logging.info('server start in http://{}:{}'.format(*server_address))
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        httpd.server_close()

def main():
    logging.basicConfig(level=logging.INFO, format='%(levelname)-8s %(asctime)s: %(message)s',
                        datefmt='%m-%d %H:%M')
    if len(argv) == 2:
        run(port=int(argv[1]))
    else:
        run()

if __name__ == '__main__':
    main()

Run the Python script, then access the service.
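To sanity-check the request path, here is a self-contained sketch: it stands up a stub handler on an assumed free port (8899, so it does not collide with the real service on 8888) and issues the same kind of GET the real service would receive.

```python
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class StubHandler(BaseHTTPRequestHandler):
    """Stand-in for the real service: answers any GET with a fake employee."""
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain;charset=utf-8')
        self.end_headers()
        self.wfile.write("{'name': 'abcdefgh', 'age': 30, 'sex': 'male'}".encode('utf-8'))

    def log_message(self, *args):  # keep the example quiet
        pass

# Assumption: port 8899 is free on this machine
server = HTTPServer(('localhost', 8899), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Same request shape as against the real service: GET /<eid>
body = urllib.request.urlopen('http://localhost:8899/1').read().decode('utf-8')
print(body)
server.shutdown()
```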

Locust stress test


pip install locust

Write a Python-based test script:

from locust import HttpUser, between, task
import numpy as np

class StressHandler(HttpUser):          # Inherits locust's HttpUser class
    wait_time = between(0, 0)           # Wait a random p1-p2 seconds between requests (here 0)

    @task                               # Functions with this decorator are executed as tasks
    def testApi(self):
        # Random id
        eid = np.random.randint(1, 100000)
        path = '/{}'.format(eid)
        # Only the path is passed here; the host is supplied from the web front end
        self.client.get(path)

    # @task
    def testBaidu(self):
        path = '/'
        self.client.get(path)

Run locust

locust -f

Running it looks like the following.

Then you can access the web UI from a browser: use the IP of the machine that started locust; unless you changed it, the default port is 8089.

Create a new task. Note that if the host is not a domain name, you need to specify a port.

After startup

Going further (multi-process)

In a real scenario, load-generation capacity is limited by the machine running the test. CPU and bandwidth are the usual bottlenecks; bandwidth may be hard to improve, but making good use of a multi-core CPU can raise the load further. Locust supports this, and can even generate load in a distributed way across multiple machines.
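A simple starting heuristic (an assumption, not a Locust rule) is one worker per CPU core, reserving one core for the master process:

```python
import os

# Count cores and leave one for the master process; this is only a
# starting point -- bandwidth or the target service may bottleneck first
n_workers = max(1, (os.cpu_count() or 2) - 1)
print('suggested workers:', n_workers)
```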

• top: observe CPU and memory usage
• iftop: observe bandwidth usage
• iostat: observe disk status (not used here)

locust -f --master		# master
# Start another console
locust -f --worker		# worker
# Open additional consoles to start more workers

There are two workers here; the number of users and the spawn rate are split evenly between them.

Going further (configuration file)

Locust has many configurable options, and they can all be set uniformly through a configuration file.

# master.conf
locustfile =
master = true
web-port = 6789
print-stats = true
only-summary = true
locust --config master.conf
# worker.conf
locustfile =
headless = true
worker = true
# master-host =
# master-port = 5557
# Start this in a new command-line window
locust --config worker.conf

The master and worker communicate on port 5557 by default; if that port conflicts with something else, see the configuration documentation to change it.
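If 5557 is taken, the port can be changed on both sides. A sketch using the option names from Locust's configuration reference (`master-bind-port` on the master, `master-port` on the worker); the port number 5560 is just an example:

```
# master.conf -- listen on a non-default port
master-bind-port = 5560

# worker.conf -- connect to the same port
master-port = 5560
```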

Going further (skip the front-end task)

You can write the test parameters directly in the configuration file instead of passing them in through the web UI; the results are then produced right after startup.

  # master.conf
  locustfile =
  master = true
  web-port = 6789
  host =
  users = 9
  spawn-rate = 3
  run-time = 20s
  headless = true                 		# It is not started at the front end, and the test parameters need to be passed in with the above configuration
  csv = ./data/csv_prefix		        # Generate CSV result files, _stats.csv, _stats_history.csv and _failures.csv
  print-stats = true			    	# Print status on console
  html = ./data/web_report.html			# html front end report
  only-summary = true
  # worker.conf
  locustfile =
  headless = true
  worker = true
  # master-host =
  # master-port = 5557
locust --config worker.conf
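The `csv` prefix makes Locust write plain CSV result files (e.g. `csv_prefix_stats.csv`), which are easy to post-process. A sketch with the stdlib `csv` module; the sample rows here are assumptions in the shape of a typical `_stats.csv`, and real column names can vary between Locust versions:

```python
import csv
import io

# Assumed sample in the shape of a Locust _stats.csv export
sample = """Type,Name,Request Count,Failure Count
GET,/1,120,0
GET,/2,118,1
,Aggregated,238,1
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    print(row['Name'], row['Request Count'], row['Failure Count'])
```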

Improving the script (multiple workers)

The startup can also be scripted in Python, launching the master and workers as subprocesses.

import os
import subprocess

WORK_DIR = os.path.abspath(os.path.dirname(__file__))

N_WORKER = 3

def runMaster():
    print('starting master')
    subprocess.Popen('locust --config ./data/master.conf', shell=True, cwd=WORK_DIR)

def runWorker():
    print('starting a worker')
    subprocess.Popen('locust --config ./data/worker.conf', shell=True, cwd=WORK_DIR)

def clear():
    # Kill any leftover processes on 5557, the default master/worker port
    subprocess.run("kill -9 $(lsof -i:5557 | awk 'NR>1{print $2}')", shell=True)
    print('cleared old tasks')

def main():
    clear()
    runMaster()
    for i in range(N_WORKER):
        runWorker()

if __name__ == '__main__':
    main()

A complete demo

Finally, here is how to use all of this to run a stress test (see the code files for this article).

Take this URL as an example (it performs a Baidu search for "test"):
path:	 /s
params:	 wd=test
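The path-plus-params string can be assembled with the stdlib `urllib.parse` instead of hand-concatenation:

```python
from urllib.parse import urlencode

# Build '/s?wd=test' from the path and a params dict
params = {'wd': 'test'}
path = '/s?' + urlencode(params)
print(path)  # /s?wd=test
```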

Modify the test script

  • Add a task like this in the test script:

        @task
        def testSearch(self):
            path = '/s?wd=test'
            self.client.get(path)
  • Modify the configuration files (data/master.conf, data/worker.conf). To run without the front end, use the configuration files from the section on skipping the front-end task.

  • Modify the startup file, mainly the number of worker processes (the default is 3):

    N_WORKER = 3
  • Start it with python3, and the report will be generated in the data directory.

Posted by CodeBuddy on Tue, 23 Nov 2021 13:27:38 -0800