A simple load-balancing configuration in practice

Keywords: Nginx, Python, PHP, Session

Originally published on my self-hosted blog:
http://www.e-lionel.com/index.php/2020/04/10/187/

In a recent Q&A project, we needed to deploy several copies of a Python service and have other programs call its interfaces in rotation. Since the callers are multithreaded, the following load-balancing configuration was set up.

First, configure upstream blocks to define the load balancers:

upstream qa_ask {
    server 192.168.1.100:15995;
    server 192.168.1.100:15996;
    server 192.168.1.100:15997;
    server 192.168.1.100:15998;
    server 192.168.1.100:15999;
}

upstream qa_ask_sec {
    server 192.168.1.101:16009;
    server 192.168.1.101:16010;
}

This defines two load balancers:
qa_ask: for the main Q&A environment
qa_ask_sec: for the secondary Q&A environment

If no load-balancing algorithm is specified, the default is round robin: requests are distributed to the listed back-end servers one by one, in order. If a server goes down, it is automatically removed from rotation without affecting users.

In addition, you can configure a scheduling state for servers that are unreliable. The commonly used states are:
1. down: the server does not participate in load balancing.
2. backup: a reserved standby server; requests are sent to it only when the other servers fail or are busy, so it carries the least load.
3. max_fails: the number of failed attempts allowed, 1 by default; it is used together with fail_timeout.
4. fail_timeout: once a server has failed max_fails times, it is taken out of rotation for this period, 10s by default; during that time nginx will not send requests to the failed server.

An example is as follows:

upstream qa_ask {
    server 192.168.1.100:15995 down;
    server 192.168.1.100:15996 weight=2;
    server 192.168.1.100:15997 weight=3 max_fails=3 fail_timeout=20s;
    server 192.168.1.100:15998 weight=4 max_fails=3 fail_timeout=20s;
    server 192.168.1.100:15999 backup;
}

upstream qa_ask_sec {
    server 192.168.1.101:16009; # default weight=1
    server 192.168.1.101:16010; # default weight=1
}

The Python services in this project all have the same processing capacity, so the default algorithm is sufficient and no special configuration is needed.

With the balancers defined, configure the server block:

server {
    listen       80;
    root         /usr/share/nginx/html;

    add_header Access-Control-Allow-Origin "*";
    add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";

    location /ask/ {
        proxy_pass http://qa_ask/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /asksec/ {
        proxy_pass http://qa_ask_sec/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

location /ask/ matches the request path, and proxy_pass http://qa_ask/ forwards matching requests to the balancer.

With the configuration above, requests to /ask/ reach the servers configured in qa_ask, and requests to /asksec/ reach the servers configured in qa_ask_sec.
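One detail worth noting: the trailing slash on proxy_pass controls whether the matched location prefix is stripped before the request reaches the back-end. A minimal sketch (the /ask2/ location and the /query path are illustrative, not part of the project's config):

```nginx
# With a trailing slash, the matched prefix /ask/ is replaced by /:
location /ask/ {
    proxy_pass http://qa_ask/;   # GET /ask/query -> back-end receives /query
}

# Without the trailing slash, the original URI is passed through unchanged:
location /ask2/ {
    proxy_pass http://qa_ask;    # GET /ask2/query -> back-end receives /ask2/query
}
```

The configuration in this post uses the trailing-slash form, so the Python services see paths without the /ask/ or /asksec/ prefix.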

Putting the pieces together, the complete configuration is as follows:

upstream qa_ask {
    server 192.168.1.100:15995;
    server 192.168.1.100:15996;
    server 192.168.1.100:15997;
    server 192.168.1.100:15998;
    server 192.168.1.100:15999;
}

upstream qa_ask_sec {
    server 192.168.1.101:16009;
    server 192.168.1.101:16010;
}

server {
    listen       80;
    root         /usr/share/nginx/html;

    add_header Access-Control-Allow-Origin "*";
    add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";

    location /ask/ {
        proxy_pass http://qa_ask/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /asksec/ {
        proxy_pass http://qa_ask_sec/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

The following algorithms can be used for load balancing:
1. Round robin (default): the algorithm used by the configuration above.
2. weight: specifies a polling weight; the larger the weight, the higher the probability a server is chosen. Use it when server performance is uneven: give the more powerful servers a larger weight to improve overall throughput.
3. ip_hash: allocates requests by a hash of the client IP, so requests from the same IP always reach the same back-end server. This effectively solves the session-sharing problem of dynamic pages.
4. fair (third party): requires Nginx's upstream_fair module; allocates requests by response time, giving priority to servers that respond fastest.
5. url_hash (third party): requires Nginx's hash module; allocates requests by a hash of the requested URL, so the same URL is directed to the same back-end server, which improves the hit rate of back-end caches.
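As a sketch, the weight and ip_hash strategies from the list above would be configured like this (the upstream names here are illustrative, not part of the project's config):

```nginx
# weight: a server with weight=3 receives roughly three times
# as many requests as one with weight=1
upstream qa_weighted {
    server 192.168.1.100:15995 weight=3;
    server 192.168.1.100:15996 weight=1;
}

# ip_hash: requests from the same client IP always go to the
# same back-end, keeping each session on one server
upstream qa_sticky {
    ip_hash;
    server 192.168.1.101:16009;
    server 192.168.1.101:16010;
}
```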

Posted by DJTim666 on Mon, 15 Jun 2020 01:04:07 -0700