Common Nginx optimization items and parameter settings in detail

Keywords: Linux Operation & Maintenance Nginx Optimize

Optimize Nginx for increased security and high concurrency

  • By tuning Nginx settings, you can harden security while supporting more concurrent requests
  • Tune the Linux kernel parameters of the host running Nginx so they are better suited to a web server handling high-concurrency traffic

Nginx configuration optimization

  • Edit nginx.conf configuration file

Set the number of nginx worker processes

  • Higher concurrency can be achieved by setting the number of nginx worker processes (see the core-count check below)
    worker_processes 8; #Number of worker processes to start; recommended to match the number of CPU logical cores
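
As a quick reference, the number of logical CPU cores can be checked on the host before choosing this value; the commands below assume a typical Linux system:

nproc                               #Print the number of logical CPU cores
grep -c processor /proc/cpuinfo     #Equivalent check via /proc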

Set the maximum number of concurrent connections per nginx worker process

worker_connections 65536; #Maximum number of concurrent connections a single nginx worker process will accept; the default is 1024, recommended 10240 or higher

Set nginx CPU affinity binding (the auto value is available since version 1.9.10)

worker_cpu_affinity 00000001 00000010 00000100 00001000; #Bind each Nginx worker process to the specified CPU core via a bitmask
worker_cpu_affinity auto; #Or let nginx bind worker processes to available CPUs automatically
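
To confirm that the binding took effect, the CPU each worker last ran on can be inspected; this assumes a procps-style ps is available:

ps -eo pid,args,psr | grep '[n]ginx: worker' #The psr column shows the CPU core each worker process last ran on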

Set the maximum number of open files for nginx process

  • The actual number of concurrent connections cannot exceed the system-level limit on open files; keep this consistent with the ulimit -n or limits.conf values (see the check below)
    worker_rlimit_nofile 65536; #Limit on the maximum number of open files for nginx worker processes
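
As a sanity check, the current limits can be compared against this value; raising them via limits.conf is covered in the last section:

ulimit -n                    #Open file limit of the current shell session
cat /proc/sys/fs/file-max    #System-wide maximum number of file handles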

Enable zero copy (sendfile)

sendfile on; #Enable sendfile to speed up static file transfer; commonly called zero copy because the file is copied within kernel space without passing through user space

Set keep-alive connection timeout

keepalive_timeout 300; #Keep-alive (long connection) timeout in seconds
keepalive_requests 500; #Maximum number of requests allowed over a single keep-alive connection; the default is 100, and raising it appropriately (for example to 500) is recommended

Enable the epoll event model

use epoll; #Use the epoll event-driven model

Enable accept_mutex to avoid the thundering herd problem

accept_mutex on; #When on, worker processes take turns accepting new connections, so only one worker is woken up for each incoming connection

Allow each worker process to accept multiple connections at once

multi_accept on; #When on, each nginx worker process can accept multiple new network connections at the same time

Enable gzip compression to speed up file transfer

gzip on; #Enable on-the-fly compression, provided by the default module ngx_http_gzip_module
gzip_static on; #Serve pre-compressed .gz files when available, provided by ngx_http_gzip_static_module

Set the default character set to utf-8 in the virtual host

charset utf-8; #The sample configuration ships with koi8-r (a Russian encoding); utf-8 is recommended instead

Enable SSL to support HTTPS and configure a rewrite that redirects HTTP to HTTPS

  • nginx's https functionality is based on the module ngx_http_ssl_module

Single Domain Name

server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /apps/nginx/certs/www.sunmy.pro.pem;
    ssl_certificate_key /apps/nginx/certs/www.sunmy.pro.key;
    ssl_session_cache shared:sslcache:20m;
    ssl_session_timeout 10m;
    root /data/nginx/html;
}

Automatic redirect from HTTP to HTTPS

server {
    listen 80 default_server;
    server_name blog.sunmy.pro;
    rewrite ^(.*)$ https://$server_name$1 permanent;
}
server {
    listen 443 ssl;
    server_name blog.sunmy.pro;
    ssl_certificate /apps/nginx/certs/blog.sunmy.pro.pem;
    ssl_certificate_key /apps/nginx/certs/blog.sunmy.pro.key;
    ssl_session_cache shared:sslcache:20m;
    ssl_session_timeout 10m;
    location / {
        root "/data/nginx/html/mobile";
    }
    location /mobile_status {
        stub_status;
    }
}
server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /apps/nginx/conf/conf.d/www.sunmy.pro.crt;
    ssl_certificate_key /apps/nginx/conf/conf.d/www.sunmy.pro.key;
    ssl_session_cache shared:sslcache:20m;
    ssl_session_timeout 10m;
    server_name www.sunmy.pro;
    error_log /apps/nginx/logs/sunmy.pro_error.log notice;
    access_log /apps/nginx/logs/sunmy.pro_access.log main;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    location / {
        root /data/nginx/html/pc;
        if ( $scheme = http ) {
            rewrite ^/(.*)$ https://www.sunmy.pro/$1 redirect;
        }
    }
}
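
With this configuration in place, the HTTP-to-HTTPS redirect can be spot-checked from the command line; the expected result is noted as a comment rather than captured from a live host:

curl -I http://www.sunmy.pro/ #Expect a 302 response with a Location: https://www.sunmy.pro/ header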

Pass the real client IP to the backend

  • Add the client IP (and the reverse proxy's own IP) to the request header so backend servers behind the reverse proxy can see the real client address
#proxy_set_header X-Real-IP $remote_addr; #Add only the client IP to the request header forwarded to the backend server
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #Append the client IP and any intermediate proxy IPs to the request header
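
On the backend, the forwarded address can then be written to its own access log; the sketch below assumes the backend is also nginx, and the format name and log path are only illustrative:

log_format proxied '$http_x_forwarded_for - $remote_addr [$time_local] "$request" $status';
access_log /apps/nginx/logs/proxied_access.log proxied;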

Enable and protect the status page

  • Based on the nginx module ngx_http_stub_status_module
location /nginx_status {
    stub_status;
    auth_basic "auth login";
    auth_basic_user_file /apps/nginx/conf/.htpasswd;
    allow 192.168.0.0/16;
    allow 127.0.0.1;
    deny all;
}
  • A custom zabbix monitoring item can later be implemented by scripting against this page, for example:
curl http://sun:123456@www.sunmy.pro/nginx_status 2>/dev/null |awk '/Reading/{print $2,$4,$6}'
3 27 185

Hide nginx version number

server_tokens off; #Do not include the nginx version number in the Server header of response messages
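
Whether the version is hidden can be verified from the response headers; the domain below is the one used throughout this article:

curl -I http://www.sunmy.pro/ 2>/dev/null | grep -i '^server' #Should show only "Server: nginx" with no version number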

Enable hotlink protection

  • Through the ngx_http_referer_module, nginx can check whether the Referer header of a request is valid, providing hotlink (anti-leech) protection
server {
    index index.html;
    valid_referers none blocked server_names *.sunmy.pro
    ~\.google\. ~\.baidu\. ~\.bing\. ~\.so\. ~\.dogedoge\. ; #Define the set of valid referers
    if ($invalid_referer) { #If the request carries a referer outside the valid set
        return 403 "Forbidden Access"; #Return status code 403
    }
......
}
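
The referer check can be exercised with curl's -e option, which sets the Referer header; the image path used here is hypothetical:

curl -e "http://www.sunmy.pro/" http://www.sunmy.pro/1.jpg -o /dev/null -s -w "%{http_code}\n"     #Valid referer, expect 200
curl -e "http://evil.example.com/" http://www.sunmy.pro/1.jpg -o /dev/null -s -w "%{http_code}\n"  #Invalid referer, expect 403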

Configure location blocks to separate static and dynamic requests by file suffix

location ~* \.(gif|jpg|jpeg|bmp|png|tiff|tif|ico|wmf|js|css)$ {
    root /data/nginx/static;
    index index.html;
}
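
For completeness, the dynamic side of the split is typically a second location that proxies to the application backend; the suffix list and backend address below are illustrative assumptions, not part of the original configuration:

location ~* \.(php|jsp|do)$ {
    proxy_pass http://127.0.0.1:8080; #Hypothetical application backend
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}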

Configure the 404 page to automatically redirect to the home page

#404 to 302
#error_page 404 /index.html;
error_page 404 =302 /index.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
#Alternatively, use try_files to detect pages that do not exist and fall back to a specified page
location / {
    root /data/nginx/html/pc;
    index index.html;
    #try_files $uri $uri.html $uri/index.html /about/default.html;
    try_files $uri $uri/index.html $uri.html =489;
}
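
If the error_page 404 =302 /index.html variant above is active (rather than the try_files fallback), the behavior can be spot-checked with a request for a made-up path:

curl -I http://www.sunmy.pro/no-such-page.html #Expect a 302 response with Location: pointing at /index.html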

Relax limits on user file uploads

client_max_body_size 100m; #Maximum size of a single file a client is allowed to upload; the default is 1m, and uploading a larger file returns a 413 error
client_body_buffer_size 1024k; #Buffer size for receiving each client request body; the default is 16k
client_body_temp_path /apps/nginx/client_body_temp/ 1 2 2; #Temporary storage path and subdirectory levels used during uploads; nginx creates the directories automatically

Set download speed limit

  • Prevent large downloads from monopolizing bandwidth
location / {
    limit_rate_after 500k;    #Start limiting only after the first 500k of a response has been sent
    limit_rate 50k;           #Then limit the transfer rate to 50k per second for each connection
}

Enable the open file cache

open_file_cache max=10000 inactive=60s; #Cache information about open files: at most 10,000 entries, and entries idle for 60s are removed
open_file_cache_valid 60s; #Re-check the validity of cached entries every 60 seconds
open_file_cache_min_uses 5; #An entry must be hit at least 5 times within the inactive period to remain in the cache
open_file_cache_errors on; #Also cache file lookup errors

Optimize logging

  • Disable access logging for static page resources
location ~* \.(?:jpg|jpeg|gif|png|ico|woff2|js|css)$ {
    access_log off; #Do not log requests for static resources once static and dynamic content are separated
}
  • Convert nginx logs to json logs, then use ELK for log collection, statistics and analysis
  #Note: This directive only supports http blocks, not server blocks
  log_format access_json '{"@timestamp":"$time_iso8601",'
'"host":"$server_addr",'
'"clientip":"$remote_addr",'
'"size":$body_bytes_sent,'
'"responsetime":$request_time,' #Total processing time
'"upstreamtime":"$upstream_response_time",'
'"upstreamhost":"$upstream_addr",' #Backend Application Server Processing Time
'"http_host":"$host",'
'"uri":"$uri",'
'"xff":"$http_x_forwarded_for",'
'"referer":"$http_referer",'
'"tcp_xff":"$proxy_protocol_addr",'
'"http_user_agent":"$http_user_agent",'
'"status":"$status"}';
access_log /apps/nginx/logs/access_json.log access_json;
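
After reloading, the JSON output can be sanity-checked; this assumes the nginx binary is in PATH and the jq utility is installed:

nginx -t && nginx -s reload                       #Validate and reload the configuration
tail -n 1 /apps/nginx/logs/access_json.log | jq . #Pretty-print the latest entry to confirm it is valid JSON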

Enable the HTTP/2 protocol (nginx 1.9.5 and later)

listen 443 ssl http2;
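
Protocol negotiation can be verified from a client, assuming a curl build with HTTP/2 support:

curl -I --http2 https://www.sunmy.pro/ #The status line should read HTTP/2 200 once the protocol is negotiated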

Nginx Host Kernel Optimization

  • Mainly optimize the number of open files and the TCP connection-teardown parameters on the host where Nginx runs; see how to apply the settings after the list
fs.file-max = 1000000
#System-wide maximum number of file handles the kernel will allocate
net.ipv4.tcp_tw_reuse = 1
#Setting this to 1 allows sockets in the TIME_WAIT state to be reused for new TCP connections, which matters on servers that accumulate large numbers of TIME_WAIT connections
net.ipv4.tcp_keepalive_time = 600
#How often TCP sends keepalive probes once keepalive is enabled; the default is 2 hours, and lowering it to 10 minutes cleans up dead connections faster
net.ipv4.tcp_fin_timeout = 30
#How long a socket stays in the FIN_WAIT_2 state after the server actively closes the connection; lowering it releases resources sooner
net.ipv4.tcp_max_tw_buckets = 5000
#Maximum number of TIME_WAIT sockets the system keeps; beyond this limit TIME_WAIT sockets are cleared immediately and a warning is printed. Too many TIME_WAIT sockets slow down a web server
net.ipv4.ip_local_port_range = 1024 65000
#Range of local ports available for outgoing TCP and UDP connections
net.ipv4.tcp_rmem = 10240 87380 12582912
#Minimum, default, and maximum sizes of the TCP receive buffer
net.ipv4.tcp_wmem = 10240 87380 12582912
#Minimum, default, and maximum sizes of the TCP send buffer
net.core.netdev_max_backlog = 8096
#When the network card receives packets faster than the kernel can process them, they are held in a queue; this parameter sets the maximum length of that queue
net.core.rmem_default = 6291456
#Default receive buffer size of kernel sockets
net.core.wmem_default = 6291456
#Default send buffer size of kernel sockets
net.core.rmem_max = 12582912
#Maximum receive buffer size of kernel sockets
net.core.wmem_max = 12582912
#Maximum send buffer size of kernel sockets
Note: the four buffer parameters above should be sized according to business requirements and actual hardware cost
net.ipv4.tcp_syncookies = 1
#Not a performance parameter; enables SYN cookies to mitigate TCP SYN flood attacks
net.ipv4.tcp_max_syn_backlog = 8192
#Maximum length of the SYN request queue during the three-way handshake; the default is 1024, and a larger value keeps Linux from dropping connection requests when Nginx is too busy to accept new connections
net.ipv4.tcp_tw_recycle = 1
#Enables fast recycling of TIME_WAIT sockets. Note: this option breaks clients behind NAT and was removed in Linux kernel 4.12, so use it with caution
net.core.somaxconn=262114
#Default is 128; sets the maximum listen backlog (pending connection queue) per socket. Under high concurrency the default can cause connection timeouts or retransmissions, so raise it accordingly
net.ipv4.tcp_max_orphans=262114
#Maximum number of TCP sockets not attached to any user file handle (orphans); beyond this, orphaned connections are reset immediately and a warning is printed. The limit exists only to guard against simple DoS attacks; do not rely on it or artificially lower it, and increase it if memory allows
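
These parameters are typically appended to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and then loaded; a minimal sequence would be:

vim /etc/sysctl.conf        #Add or adjust the parameters above
sysctl -p                   #Reload the settings and print the values that were applied
sysctl net.core.somaxconn   #Spot-check a single value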

PAM Resource Limit Optimization

  • Raise the per-user limits on open files (nofile) and processes (nproc) so the host can accept a high number of concurrent connections; the nofile value should match nginx's worker_rlimit_nofile (a quick verification follows the example)
vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
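
After logging in again (or restarting nginx), the limits that actually apply can be checked; pgrep -o picks the oldest matching PID, which is normally the nginx master process:

ulimit -n; ulimit -u                                          #Limits of the current shell session
grep -E 'open files|processes' /proc/$(pgrep -o nginx)/limits #Limits applied to the running nginx master process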
