A note on updating the application behind an nginx+tomcat cluster without users noticing

Keywords: Programming Tomcat Nginx firewall

Previously, the nginx+tomcat cluster was configured as a simple round-robin:

upstream    servers{
        #server  172.25.67.29:9091 weight=1 max_fails=1 fail_timeout=50;
        server  172.25.67.29:9091;
        server 172.25.67.27:9091;
        server 172.25.67.27:8380;
    }

As shown above. With this setup, restarting one of the Tomcat instances is fine in principle. However, because the application is large, it is slow to load, and Tomcat opens its port as soon as it starts — before the application has finished deploying. If a user sends a request at that moment, it may sit there waiting for a response.

Users generally have no idea this is happening; my guess is they just call and complain that the system is slow... And some critical requests, such as payments, may actually fail.

So how do I get nginx to take a backend out of rotation while it is starting up?

 upstream    servers{
        #server  172.25.67.29:9091 weight=1 max_fails=1 fail_timeout=50;
        server  172.25.67.29:9091 max_fails=10 fail_timeout=100;
        server 172.25.67.27:9091 max_fails=10 fail_timeout=100;
        server 172.25.67.27:8380 max_fails=10 fail_timeout=100;

#max_fails is the number of failed requests after which the server is taken down for fail_timeout seconds. Here: after 10 errors, the server is down for 100 seconds. The default is 1.
#What counts as a failure for max_fails? Two that I have seen: [error] 26726#0: *130141 connect() failed (111: Connection refused) while connecting to upstream, and: upstream timed out (110: Connection timed out) while reading response header from upstream
    }
 location ^~ /servers {
            proxy_pass http://servers;
            proxy_redirect off;
                #       proxy_redirect http:// https://; 
            proxy_set_header Host $host:$server_port;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 50m;
            client_body_buffer_size 256k;
            proxy_connect_timeout 300s; #Time allowed to establish a connection to the backend; on a LAN this rarely matters
            proxy_send_timeout 300s;   #Time the backend has to accept data from nginx; only matters when a large request body must be transmitted within the limit — usually irrelevant
            proxy_read_timeout 100s;  #Effectively how long the backend gets to process a request; if it has not responded by then, nginx gives up — will it throw a 504 or a 502?
            proxy_buffer_size 16k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            #proxy_next_upstream error timeout http_504 http_500; — retry-on-error is not configured here. For a 500 there is no point trying another server; for the rest it is hard to say
        }

One problem with the approach above is that the retry window is hard to tune: if the service comes up in 10 seconds, you still sit out the remaining 90. Mediocre. There is also the case of requests that time out even though the service itself is fine — those failures still count, and the server gets marked down. If a user keeps triggering slow requests, the timeouts can pile up and take every backend down. Baidu has plenty of solutions for this.

I quietly came up with another plan: use the server's firewall. Block the service port with the firewall before shutting the service down, then reopen it once startup has finished. This method is troublesome enough in its own way.
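As a sketch, the firewall approach could look like the following. iptables and the Tomcat paths are assumptions — the post names neither a firewall tool nor a deploy script; the port matches the upstream config above.

```shell
# Hypothetical restart script for one backend (assumes iptables and root).

# 1. Block new connections to the Tomcat port so nginx fails over to the other
#    backends. "! -i lo" keeps loopback open for the local health check below.
iptables -I INPUT -p tcp --dport 9091 ! -i lo -j REJECT --reject-with tcp-reset

# 2. Restart Tomcat (paths are illustrative)
/opt/tomcat/bin/shutdown.sh
/opt/tomcat/bin/startup.sh

# 3. Wait until the application actually answers — not just until the port is
#    open, since Tomcat binds the port before deployment finishes.
until curl -sf http://127.0.0.1:9091/servers/ > /dev/null; do
    sleep 5
done

# 4. Remove the rule so nginx can reach this backend again
iptables -D INPUT -p tcp --dport 9091 ! -i lo -j REJECT --reject-with tcp-reset
```

The point of step 3 is exactly the startup gap described earlier: the port being open says nothing about whether the application has finished loading.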

That fixes the nginx side, but the Tomcat side is still unsolved: the service handles many slow requests, and whatever is in flight gets cut off when Tomcat shuts down.

Baidu suggests kill -15 <pid> or shutdown.sh, but in testing it is not that graceful: the service goes down quickly without waiting for in-flight requests to finish. It does at least trigger transaction rollback. Short requests may still get their response back; long ones are basically lost.
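For context, `kill -15` sends SIGTERM, which a process can trap to run cleanup code — that handler is what Tomcat's shutdown hook (and the transaction rollback mentioned above) relies on. A minimal shell sketch of the mechanism (the names are illustrative):

```shell
# Demonstrates that SIGTERM (kill -15) lets a process run a cleanup handler,
# but nothing forces it to finish outstanding work first.
run_worker_and_term() {
    (
        trap 'echo cleanup; exit 0' TERM   # like Tomcat's shutdown hook
        while :; do sleep 1; done          # pretend to serve requests forever
    ) &
    pid=$!
    sleep 1            # give the worker time to install its trap
    kill -15 "$pid"    # same signal as a plain `kill`
    wait "$pid"        # the worker prints "cleanup" and exits
}
```

The worker never gets to "finish" its loop — the trap simply runs and the process exits, which mirrors why long requests are lost while rollback still happens.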

Posted by w.geoghegan on Mon, 04 May 2020 19:49:02 -0700