Summary
The platform currently performs rather poorly under load, so we need to find ways to optimize its performance.
Tools used
Siege is an HTTP load testing and benchmarking tool. It is designed to let web developers measure their code under duress, to see how it will stand up to load on the Internet. Siege supports basic authentication, cookies, and the HTTP, HTTPS and FTP protocols. It lets the user hit a server with a configurable number of simulated clients, which place the server "under siege".
To put it bluntly, Siege is a multithreaded HTTP stress testing tool. The official website lists the latest version as 3.1.4, and installation instructions can be found there, though the site does not seem to have been updated in a long time: the siege I installed on my Mac is already version 4.0.4. On a Mac you can install it directly with brew.
```
brew install siege
siege

SIEGE 4.0.4
Usage: siege [options]
       siege [options] URL
       siege -g URL
Options:
  -V, --version             VERSION, prints the version number.
  -h, --help                HELP, prints this section.
  -C, --config              CONFIGURATION, show the current config.
  -v, --verbose             VERBOSE, prints notification to screen.
  -q, --quiet               QUIET turns verbose off and suppresses output.
  -g, --get                 GET, pull down HTTP headers and display the
                            transaction. Great for application debugging.
  -p, --print               PRINT, like GET only it prints the entire page.
  -c, --concurrent=NUM      CONCURRENT users, default is 10
  -r, --reps=NUM            REPS, number of times to run the test.
  -t, --time=NUMm           TIMED testing where "m" is modifier S, M, or H
                            ex: --time=1H, one hour test.
  -d, --delay=NUM           Time DELAY, random delay before each request
                            between .001 and NUM. (NOT COUNTED IN STATS)
  -b, --benchmark           BENCHMARK: no delays between requests.
  -i, --internet            INTERNET user simulation, hits URLs randomly.
  -f, --file=FILE           FILE, select a specific URLS FILE.
  -R, --rc=FILE             RC, specify an siegerc file
  -l, --log[=FILE]          LOG to FILE. If FILE is not specified, the
                            default is used: PREFIX/var/siege.log
  -m, --mark="text"         MARK, mark the log file with a string.
  -H, --header="text"       Add a header to request (can be many)
  -A, --user-agent="text"   Sets User-Agent in request
  -T, --content-type="text" Sets Content-Type in request
      --no-parser           NO PARSER, turn off the HTML page parser
      --no-follow           NO FOLLOW, do not follow HTTP redirects

Copyright (C) 2017 by Jeffrey Fulmer, et al.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.
```
A few common commands are given directly below; the meaning of each command-line parameter can be found in reference 1.
```
# GET request
siege -c 1000 -r 100 -b url

# POST request
siege -c 1000 -r 100 -b "url POST {\"accountId\":\"123\",\"platform\":\"ios\"}"
```
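Siege can also read its targets from a file with -f (see the help output above). As a sketch, urls.txt here is a hypothetical file with one URL per line:

```
# urls.txt -- hypothetical file, one URL per line
http://127.0.0.1:5000/
http://127.0.0.1:5000/hello/libai

# Hit the URLs from the file with 100 simulated users, 10 repetitions each
siege -c 100 -r 10 -b -f urls.txt
```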
Test
Test code
Take a look at the file tree structure with tree:
```
➜ flask tree
.
├── hello1.py
├── hello1.pyc
├── hello2.py
├── hello2.pyc
├── hello3.py
└── templates
    └── hello.html
```
The following is a piece of Flask code that does not use a template and just returns a string.
```python
# file hello1.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)
```
Here is a piece of Flask code that uses the template file.
```python
# file hello2.py
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/hello/')
@app.route('/hello/<name>')
def hello(name=None):
    return render_template('hello.html', name=name)

app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)
```
The hello.html template file:
```html
<!doctype html>
<title>Hello from Flask</title>
{% if name %}
  <h1>Hello {{ name }}!</h1>
{% else %}
  <h1>Hello, World!</h1>
{% endif %}
```
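Before load testing, it is worth checking the routes by hand; for example with curl (assuming hello2.py is already running on port 5000):

```
curl http://127.0.0.1:5000/hello/
curl http://127.0.0.1:5000/hello/libai
```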
Running Flask directly
First look at the test results of hello1.py
```
# 100 concurrent
siege -c 100 -r 10 -b http://127.0.0.1:5000
Transactions:                   1000 hits
Availability:                 100.00 %
Elapsed time:                   1.17 secs
Data transferred:               0.01 MB
Response time:                  0.11 secs
Transaction rate:             854.70 trans/sec
Throughput:                     0.01 MB/sec
Concurrency:                   92.12
Successful transactions:        1000
Failed transactions:               0
Longest transaction:            0.14
Shortest transaction:           0.01

# 200 concurrent
siege -c 200 -r 10 -b http://127.0.0.1:5000
Transactions:                   1789 hits
Availability:                  89.45 %
Elapsed time:                   2.26 secs
Data transferred:               0.02 MB
Response time:                  0.17 secs
Transaction rate:             791.59 trans/sec
Throughput:                     0.01 MB/sec
Concurrency:                  134.37
Successful transactions:        1789
Failed transactions:             211
Longest transaction:            2.09
Shortest transaction:           0.00

# 1000 concurrent
siege -c 1000 -r 10 -b http://127.0.0.1:5000
Transactions:                  10000 hits
Availability:                 100.00 %
Elapsed time:                  16.29 secs
Data transferred:               0.12 MB
Response time:                  0.00 secs
Transaction rate:             613.87 trans/sec
Throughput:                     0.01 MB/sec
Concurrency:                    2.13
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.08
Shortest transaction:           0.00
```
I am not sure why availability drops at 200 concurrent users, but the overall trend is clear: the transaction rate keeps falling as concurrency rises, down to about 613 trans/sec at 1000 concurrent.
Now look at the second piece of code, hello2.py:
```
# 100 concurrent
siege -c 100 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:                   1000 hits
Availability:                 100.00 %
Elapsed time:                   1.26 secs
Data transferred:               0.07 MB
Response time:                  0.12 secs
Transaction rate:             793.65 trans/sec
Throughput:                     0.06 MB/sec
Concurrency:                   93.97
Successful transactions:        1000
Failed transactions:               0
Longest transaction:            0.14
Shortest transaction:           0.04

# 200 concurrent
siege -c 200 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:                   1837 hits
Availability:                  91.85 %
Elapsed time:                   2.52 secs
Data transferred:               0.13 MB
Response time:                  0.18 secs
Transaction rate:             728.97 trans/sec
Throughput:                     0.05 MB/sec
Concurrency:                  134.77
Successful transactions:        1837
Failed transactions:             163
Longest transaction:            2.18
Shortest transaction:           0.00

# 1000 concurrent
siege -c 1000 -r 10 -b http://127.0.0.1:5000/hello/libai
Transactions:                  10000 hits
Availability:                 100.00 %
Elapsed time:                  17.22 secs
Data transferred:               0.70 MB
Response time:                  0.01 secs
Transaction rate:             580.72 trans/sec
Throughput:                     0.04 MB/sec
Concurrency:                    7.51
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.09
Shortest transaction:           0.00
```
Other deployment methods
Next, test the deployment methods recommended by the official Flask documentation.
> Although it is light and easy to use, Flask's built-in server is not suitable for production and it does not scale well. This article mainly explains some methods of running Flask correctly in a production environment.
> If you want to deploy your Flask application on a WSGI server not listed here, consult its documentation on how to use WSGI. Just remember: the Flask application object is essentially a WSGI application.
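Since the Flask app object is itself a WSGI callable, any WSGI server can host it. As a minimal illustration (not for production), the standard library's wsgiref server can serve hello1's app directly, assuming the app.run(...) line has been removed from hello1.py as described in the gunicorn section below:

```python
# serve_wsgiref.py -- hypothetical file, for illustration only
from wsgiref.simple_server import make_server

from hello1 import app  # assumes hello1.py no longer calls app.run()

# wsgiref is single-threaded and unoptimized; it just shows that `app`
# is a plain WSGI application that any WSGI server can drive.
make_server("127.0.0.1", 5000, app).serve_forever()
```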
Below, a few of the officially recommended methods are tested for performance.
Gunicorn
Gunicorn ('Green Unicorn') is a WSGI HTTP server for UNIX. It is a pre-fork worker model ported from Ruby's Unicorn project, and it supports both eventlet and greenlet. Running a Flask application on gunicorn is very simple:
```
gunicorn myproject:app
```
Of course, to use gunicorn we first need to install it with pip install gunicorn. To start hello1.py with gunicorn, you need to delete the line

```python
app.run(debug=False, threaded=True, host="127.0.0.1", port=5000)
```

and then execute the command:
```
# -w sets the number of worker processes; -b binds the IP and port
gunicorn hello1:app -w 4 -b 127.0.0.1:4000
```
By default, gunicorn uses a synchronous blocking worker model (-k sync), which may not perform well under heavy concurrent access. It also supports better models such as gevent and meinheld, so we can replace the blocking model with gevent:
```
# -w sets the number of worker processes; -b binds the IP and port;
# -k gevent replaces the blocking worker model with gevent
gunicorn hello1:app -w 4 -b 127.0.0.1:4000 -k gevent
```
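Instead of repeating these flags on every run, gunicorn can also load its settings from a Python config file. A minimal sketch mirroring the flags above (gunicorn.conf.py is a file name of my choosing, passed explicitly with -c):

```python
# gunicorn.conf.py -- a sketch mirroring the command-line flags used above
bind = "127.0.0.1:4000"   # same as -b 127.0.0.1:4000
workers = 4               # same as -w 4
worker_class = "gevent"   # same as -k gevent
```

Start it with gunicorn -c gunicorn.conf.py hello1:app.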
Next, I test four cases at 1000 concurrent users with 10 repetitions each: one worker and four workers, each with and without the gevent model.
Before testing, make sure to raise the ulimit value, otherwise a "Too many open files" error will be reported. I set it to 65535:
```
ulimit -n 65535
```
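If you want to confirm the limit from inside Python, the standard library's resource module (Unix only) reports it:

```python
# check_nofile.py -- prints the soft and hard open-file limits (Unix only)
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d hard=%d" % (soft, hard))
```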
```
gunicorn hello1:app -w 1 -b 127.0.0.1:4000
siege -c 1000 -r 10 -b http://127.0.0.1:4000
Transactions:                  10000 hits
Availability:                 100.00 %
Elapsed time:                  15.21 secs
Data transferred:               0.12 MB
Response time:                  0.00 secs
Transaction rate:             657.46 trans/sec
Throughput:                     0.01 MB/sec
Concurrency:                    0.85
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.01
Shortest transaction:           0.00
# As you can see, a single worker is slightly better than starting Flask directly.

gunicorn hello1:app -w 4 -b 127.0.0.1:4000
siege -c 1000 -r 10 -b http://127.0.0.1:4000
Transactions:                  10000 hits
Availability:                 100.00 %
Elapsed time:                  15.19 secs
Data transferred:               0.12 MB
Response time:                  0.00 secs
Transaction rate:             658.33 trans/sec
Throughput:                     0.01 MB/sec
Concurrency:                    0.75
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.01
Shortest transaction:           0.00

# Use gevent; remember to pip install gevent first
gunicorn hello1:app -w 1 -b 127.0.0.1:4000 -k gevent
Transactions:                  10000 hits
Availability:                 100.00 %
Elapsed time:                  15.20 secs
Data transferred:               0.12 MB
Response time:                  0.00 secs
Transaction rate:             657.89 trans/sec
Throughput:                     0.01 MB/sec
Concurrency:                    1.33
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.02
Shortest transaction:           0.00

gunicorn hello1:app -w 4 -b 127.0.0.1:4000 -k gevent
Transactions:                  10000 hits
Availability:                 100.00 %
Elapsed time:                  15.51 secs
Data transferred:               0.12 MB
Response time:                  0.00 secs
Transaction rate:             644.75 trans/sec
Throughput:                     0.01 MB/sec
Concurrency:                    1.06
Successful transactions:       10000
Failed transactions:               0
Longest transaction:            0.28
Shortest transaction:           0.00
```
As you can see, at 1000 concurrent users the benefit of gunicorn and gevent is not obvious. But if we change the concurrency to 100 or 200 and test again:
```
gunicorn hello1:app -w 1 -b 127.0.0.1:4000 -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000
Transactions:                   1991 hits
Availability:                  99.55 %
Elapsed time:                   1.62 secs
Data transferred:               0.02 MB
Response time:                  0.14 secs
Transaction rate:            1229.01 trans/sec
Throughput:                     0.02 MB/sec
Concurrency:                  167.71
Successful transactions:        1991
Failed transactions:               9
Longest transaction:            0.34
Shortest transaction:           0.00

gunicorn hello1:app -w 4 -b 127.0.0.1:4000 -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000
Transactions:                   2000 hits
Availability:                 100.00 %
Elapsed time:                   0.71 secs
Data transferred:               0.02 MB
Response time:                  0.04 secs
Transaction rate:            2816.90 trans/sec
Throughput:                     0.03 MB/sec
Concurrency:                  122.51
Successful transactions:        2000
Failed transactions:               0
Longest transaction:            0.17
Shortest transaction:           0.00
```
You can see that with 4 workers and gevent, the transaction rate reaches 2816 trans/sec.
Then test the efficiency of hello2.py under 200 concurrent users.
```
gunicorn hello2:app -w 1 -b 127.0.0.1:4000 -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000/hello/2
Transactions:                   1998 hits
Availability:                  99.90 %
Elapsed time:                   1.72 secs
Data transferred:               0.13 MB
Response time:                  0.14 secs
Transaction rate:            1161.63 trans/sec
Throughput:                     0.08 MB/sec
Concurrency:                  168.12
Successful transactions:        1998
Failed transactions:               2
Longest transaction:            0.35
Shortest transaction:           0.00

gunicorn hello2:app -w 4 -b 127.0.0.1:4000 -k gevent
siege -c 200 -r 10 -b http://127.0.0.1:4000/hello/2
Transactions:                   2000 hits
Availability:                 100.00 %
Elapsed time:                   0.71 secs
Data transferred:               0.13 MB
Response time:                  0.05 secs
Transaction rate:            2816.90 trans/sec
Throughput:                     0.19 MB/sec
Concurrency:                  128.59
Successful transactions:        2000
Failed transactions:               0
Longest transaction:            0.14
Shortest transaction:           0.00
```
As you can see, hello2.py performs about the same as hello1.py, also reaching 2800+ trans/sec, roughly a fourfold improvement over running Flask directly.
uWSGI
The official website is uWSGI; see the installation page there for how to install it. On a Mac, you can install it directly with brew install uwsgi. After installation, run the following in the project directory:
```
uwsgi --http 127.0.0.1:4000 --module hello1:app
```
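For rough parity with the four-worker gunicorn tests, uWSGI can also spawn multiple processes and threads from the command line; a sketch using its standard options (the same processes/threads knobs appear in the ini file in the next section):

```
uwsgi --http 127.0.0.1:4000 --module hello1:app --processes 4 --threads 2
```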
I am short on time here, so I will leave the uWSGI section at this for now.
uWSGI and nginx
To install uwsgi, use pip install uwsgi.
Write the uWSGI configuration file uwsgi.ini:
```ini
[uwsgi]
# Enable the master process
master = true
# Directory of the virtual Python environment (created by virtualenv)
home = venv
# WSGI startup file
wsgi-file = manage.py
# The app object created in the WSGI startup file
callable = app
# Bound address and port
socket = 0.0.0.0:5000
# Number of worker processes
processes = 4
# Threads per process
threads = 2
# Allowed buffer size
buffer-size = 32768
# Protocol to speak. Note!!! This line is required when starting with uwsgi
# directly; without it the service starts but the browser cannot reach it.
# When nginx is used as a proxy, this line must be removed, otherwise nginx
# cannot proxy to the uwsgi service properly.
protocol = http
```
The uwsgi startup file is manage.py, and hello1 is the hello1.py shown above, with the app.run(debug=False, threaded=True, host="127.0.0.1", port=5000) line commented out.
```python
# file manage.py
from flask_script import Manager  # Manager comes from the Flask-Script extension
from hello1 import app

manager = Manager(app)

if __name__ == '__main__':
    manager.run()
```
Then start the program with the command uwsgi uwsgi.ini and visit 127.0.0.1:5000 locally to see the hello world page. Next, combine it with nginx. After installing nginx, find its configuration file; if you installed nginx with apt or yum, it is at /etc/nginx/nginx.conf. To avoid affecting the global configuration, modify /etc/nginx/sites-available/default instead, which is included by /etc/nginx/nginx.conf, so changes there also take effect. The configuration file content:
```nginx
# nginx per-IP rate limit; see references 6 and 7 for details
limit_req_zone $binary_remote_addr zone=allips:100m rate=50r/s;

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # nginx per-IP rate limit; see references 6 and 7 for details
    limit_req zone=allips burst=20 nodelay;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    # Serve static files from nginx directly; nginx serves static files
    # much faster than other containers.
    location /themes/ {
        alias /home/dc/CTFd_M/CTFd/themes/;
    }

    # uwsgi configuration
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:5000;
        # python virtualenv path
        uwsgi_param UWSGI_PYHOME /home/dc/CTFd_M/venv;
        # Current project path
        uwsgi_param UWSGI_CHDIR /home/dc/CTFd_M;
        # Startup file
        uwsgi_param UWSGI_SCRIPT manage:app;
        # Timeout
        uwsgi_read_timeout 100;
    }
}
```
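After editing the file, it is good practice to validate the syntax and reload nginx before testing; on Ubuntu the usual commands are:

```
# Validate the configuration, then reload without dropping connections
sudo nginx -t
sudo systemctl reload nginx
```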
Then start the nginx server and visit 127.0.0.1; it should be accessible normally. Because of a problem with my local configuration, I could not get this setup working on my own machine, so for the comparison I created a new virtual machine (Ubuntu Server 16.04, 2 cores, 2 GB RAM). The page tested here is not the hello1.py demo above but a complete application platform. As the Throughput row shows, it reaches a processing speed of 20+ MB/s.
```
# The following two tests access a virtual machine from the physical host.
# The VM runs Ubuntu Server 16.04.

# Started with uwsgi alone
siege -c 200 -r 10 -b http://192.168.2.151:5000/index.html
Transactions:                  56681 hits
Availability:                  99.90 %
Elapsed time:                 163.48 secs
Data transferred:            3385.71 MB
Response time:                  0.52 secs
Transaction rate:             346.72 trans/sec
Throughput:                    20.71 MB/sec
Concurrency:                  180.97
Successful transactions:       56681
Failed transactions:              59
Longest transaction:           32.23
Shortest transaction:           0.05

# With uwsgi behind nginx (nginx also serving static files)
siege -c 200 -r 10 -b http://192.168.2.151/index.html
Transactions:                  53708 hits
Availability:                  99.73 %
Elapsed time:                 122.13 secs
Data transferred:            3195.15 MB
Response time:                  0.29 secs
Transaction rate:             439.76 trans/sec
Throughput:                    26.16 MB/sec
Concurrency:                  127.83
Successful transactions:       53708
Failed transactions:             148
Longest transaction:          103.07
Shortest transaction:           0.00
```
As you can see, using uwsgi and nginx together improves efficiency somewhat, from 346 trans/sec to 439 trans/sec.