nginx load balancing and automatic server failover: when a server fails, requests are automatically forwarded to another server in the upstream load-balancing pool.
Add a health check address to determine whether the service is working.
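Open-source nginx only does passive health checks through `max_fails`/`fail_timeout` (active checks need nginx Plus or a third-party module such as `nginx_upstream_check_module`), so a common approach is to expose a lightweight status location on each back end and poll it externally. Below is a minimal sketch; the `/health` path is an assumption for illustration, not part of the original configuration.

```nginx
# Minimal sketch: each back-end nginx exposes a hypothetical /health endpoint
# that the load balancer or an external monitor can poll.
server {
    listen 5050;

    # Returns 200 with a short body whenever this nginx instance is up.
    location = /health {
        access_log off;
        return 200 "ok\n";
    }
}
```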
nginx upstream currently supports the following allocation methods (a configuration sketch follows the list):
- Round-robin (default): each request is assigned to a different back-end server in turn, in chronological order; if a back-end server goes down, it is automatically excluded.
- weight: specifies the polling probability, proportional to the access ratio; used when back-end server performance is uneven.
- ip_hash: each request is assigned according to the hash of the client IP, so each visitor always reaches the same back-end server; this solves the session problem.
- fair (third party): allocates requests based on the response time of the back-end servers, giving priority to those with short response times.
- url_hash (third party): distributes requests by the hash of the requested URL, so the same URL always goes to the same back-end server; mainly useful when the back ends are caches.
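A sketch of how a few of these policies look in an upstream block. The server addresses are placeholders; note that on nginx 1.7.2+ the built-in `hash` directive can hash on `$request_uri`, covering the same use case as the third-party url_hash module.

```nginx
# Sketch only; 10.0.0.1/10.0.0.2 are placeholder addresses.
upstream weighted_pool {
    # weight: higher weight => larger share of requests
    server 10.0.0.1:5050 weight=10;
    server 10.0.0.2:5050 weight=5;
}

upstream sticky_pool {
    # hash by client IP so one visitor keeps hitting the same back end
    ip_hash;
    server 10.0.0.1:5050;
    server 10.0.0.2:5050;
}

upstream url_pool {
    # built-in equivalent (nginx 1.7.2+) of the third-party url_hash module
    hash $request_uri consistent;
    server 10.0.0.1:5050;
    server 10.0.0.2:5050;
}
```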
Some default parameters first:
```nginx
# weight: the higher the weight, the greater the probability of being assigned
# max_fails: number of allowed failed requests, default 1; once exceeded, the
#            error defined by proxy_next_upstream is returned
# fail_timeout: how long the server is paused after max_fails failures
# down: marks a single server as temporarily not participating in the load
# backup: requested only when all non-backup machines are down or busy, so
#         this machine carries the lightest load
server 120.10.192.72:5050  max_fails=3 fail_timeout=3s weight=10;
server 120.10.157.102:5050 max_fails=3 fail_timeout=3s weight=5;
```
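The `down` and `backup` flags described above attach to a server line in the same way; a small illustration, with the third address being a hypothetical standby rather than part of the original setup:

```nginx
upstream backend_web {
    server 120.10.192.72:5050  max_fails=3 fail_timeout=3s weight=10;
    server 120.10.157.102:5050 down;     # temporarily removed from load balancing
    server 120.10.200.1:5050   backup;   # hypothetical standby, used only when the others are down or busy
}
```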
1. Open an entry point on an nginx port for load-balancing requests, e.g. port 80:
```nginx
upstream backend_web {
    #ip_hash;
    # weight: the higher the weight, the greater the probability of being assigned
    # max_fails: number of allowed failed requests, default 1; once exceeded, the
    #            error defined by proxy_next_upstream is returned
    # fail_timeout: how long the server is paused after max_fails failures
    # down: marks a single server as temporarily not participating in the load
    # backup: requested only when all non-backup machines are down or busy, so
    #         this machine carries the lightest load
    server xxxxx:5050;    # gametest1 service
    server xxxxx:5050;    # gametest2 service
}

server {
    listen      80;
    server_name xx.xx.xx.xx;    # if there is no domain name, enter the IP address here

    access_log  /opt/nginx_logs/load_gameserver/access.log main;
    error_log   /opt/nginx_logs/load_gameserver/access.log;
    index       index.html index.htm index.php;

    location / {
        proxy_pass            http://backend_web;
        proxy_connect_timeout 1;
        proxy_set_header      X-Real-IP $remote_addr;
        proxy_set_header      Host $http_host;
        # back-end web servers can get the user's real IP from X-Forwarded-For
        proxy_set_header      X-Forwarded-For $remote_addr;
        # automatically forward requests that return 502/504, execution timeouts,
        # etc. to another server in the load-balancing pool (failover)
        proxy_next_upstream   http_502 http_504 http_404 error timeout invalid_header;
    }
}
```
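With `proxy_next_upstream` enabled, it can also be useful to cap how many servers are tried or how long retries may take. On nginx 1.7.5+ this can be done with `proxy_next_upstream_tries` and `proxy_next_upstream_timeout`; a sketch of the same `location /` with those limits added (the values are illustrative, not from the original config):

```nginx
location / {
    proxy_pass          http://backend_web;
    proxy_next_upstream http_502 http_504 error timeout invalid_header;

    # give up after trying 2 servers or after 10 seconds in total
    proxy_next_upstream_tries   2;
    proxy_next_upstream_timeout 10s;
}
```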
2. The gametest1 and gametest2 nginx configurations are the same:
```nginx
server {
    listen               5050;
    server_name          xxxxxx;
    client_max_body_size 20m;
    access_log           /opt/nginx_logs/gameserver/access.log main;
    error_log            /opt/nginx_logs/gameserver/error.log;

    location = /platform/get_main_h5 {
        secure_link     $arg_sign,$arg_et;
        secure_link_md5 "$uri $arg_version_name $arg_channel_name $arg_device_id $arg_et $arg_nonce_str gohell";
        if ($secure_link = "") {
            return 403;
        }
        if ($secure_link = "0") {
            return 410;
        }
        if ($arg_sign = "") {
            return 504;
        }
        add_header Access-Control-Allow-Origin  *;
        add_header Access-Control-Allow-Methods POST,GET,OPTIONS;
        add_header Access-Control-Allow-Headers x-requested-with,content-type;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_headers_hash_max_size    51200;
        proxy_headers_hash_bucket_size 6400;
        set_real_ip_from 0.0.0.0/0;
        real_ip_header   X-Forwarded-For;
        include    uwsgi_params;
        uwsgi_pass 127.0.0.1:7072;
    }

    location / {
        add_header Access-Control-Allow-Origin  *;
        add_header Access-Control-Allow-Methods POST,GET,OPTIONS;
        add_header Access-Control-Allow-Headers x-requested-with,content-type;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_headers_hash_max_size    51200;
        proxy_headers_hash_bucket_size 6400;
        set_real_ip_from 0.0.0.0/0;
        real_ip_header   X-Forwarded-For;
        include    uwsgi_params;
        uwsgi_pass 127.0.0.1:7072;
    }
}
```
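Since the two `location` blocks repeat the same CORS, real-IP and uwsgi directives, one option is to move the shared part into a separate file and `include` it from each location; the file path below is an assumption for illustration only.

```nginx
# /etc/nginx/snippets/game_uwsgi.conf  (hypothetical path)
add_header Access-Control-Allow-Origin  *;
add_header Access-Control-Allow-Methods POST,GET,OPTIONS;
add_header Access-Control-Allow-Headers x-requested-with,content-type;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
set_real_ip_from 0.0.0.0/0;
real_ip_header   X-Forwarded-For;
include    uwsgi_params;
uwsgi_pass 127.0.0.1:7072;

# then each location block only needs:
# location / {
#     include /etc/nginx/snippets/game_uwsgi.conf;
# }
```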
Sharing sessions between multiple servers
- The ip_hash directive in nginx directs requests from the same IP to the same back end, so that a client with that IP keeps a stable session with one back-end server. It is defined in the upstream configuration:
```nginx
upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    ip_hash;
}
```
ip_hash is easy to understand, but because the client IP is the only factor it can use to allocate back ends, it has limitations and cannot be used in some cases:
- nginx is not the front-most server. ip_hash requires nginx to be the front-most server; otherwise nginx cannot obtain the correct client IP and cannot hash by IP. For example, if squid sits in front, nginx only sees squid's server IP, and using that address for distribution is certainly wrong.
- There is other load balancing behind nginx. If the nginx back end has another load balancer that distributes requests in a different way, requests from a single client cannot be pinned to the same session application server. In this case the nginx back end can only point directly to the application servers, or another squid can be added in between and pointed at the application servers. The best approach is to split traffic once with location: requests that need a session go through ip_hash, and the rest go to the other back ends (see the sketch below).
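A sketch of that last approach, splitting by location so that only session-bound paths use ip_hash. The upstream names and the `/session/` path are illustrative assumptions, not part of the original configuration.

```nginx
upstream session_pool {
    ip_hash;                 # session-bound requests stick to one back end
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

upstream default_pool {
    server 127.0.0.1:8080;   # everything else is load balanced normally
    server 127.0.0.1:8081;
}

server {
    listen 80;

    # requests that need a session go through ip_hash
    location /session/ {
        proxy_pass http://session_pool;
    }

    # the rest go to the ordinary pool
    location / {
        proxy_pass http://default_pool;
    }
}
```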