Common configuration items
In day-to-day work, most of our interaction with Nginx happens through its configuration files, so it is worth understanding what the main configuration items do.
First, the content of nginx.conf is usually as follows:
```nginx
...              # Core module (global configuration)
events {         # Events module
    ...
}
http {           # HTTP module
    server {     # server block
        location [PATTERN] {   # location block
            ...
        }
        location [PATTERN] {
            ...
        }
    }
    server {
        ...
    }
}
mail {           # Mail module
    server {     # server block
        ...
    }
}
```
Let's take a look at the general configuration items for each module in turn:
Core module
```nginx
user admin;                     # User (and optionally group) that worker processes run as
worker_processes 4;             # Number of worker processes; default is 1
pid /nginx/pid/nginx.pid;       # Path of the file that stores the Nginx process ID
error_log log/error.log debug;  # Error log path and log level
```
Event module
```nginx
events {
    accept_mutex on;         # Serialize accepting new connections to avoid the thundering herd problem; default is on
    multi_accept on;         # Whether a worker accepts multiple new connections at once; default is off
    use epoll;               # Event-driven model: select | poll | kqueue | epoll | rtsig
    worker_connections 1024; # Maximum number of connections per worker; default is 512
}
```
http module
```nginx
http {
    include mime.types;                     # Mapping table of file extensions to MIME types
    default_type application/octet-stream;  # Default MIME type; text/plain by default
    access_log off;                         # Disable the access log
    sendfile on;                            # Transfer files with sendfile; default is off; valid in http, server, and location blocks
    sendfile_max_chunk 100k;                # Maximum amount of data transferred per sendfile() call; no limit by default
    keepalive_timeout 65;                   # Keep-alive timeout; default is 75s; valid in http, server, and location blocks

    server {
        keepalive_requests 120;             # Maximum number of requests per keep-alive connection
        listen 80;                          # Listening port
        server_name 127.0.0.1;              # Server name / listening address
        index index.html index.htm index.php;
        root your_path;                     # Document root

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
            # fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }
}
```
Configuration item resolution
- worker_processes
worker_processes sets the number of Nginx worker processes. It is recommended to set it to the number of CPU cores.
- worker_cpu_affinity
worker_cpu_affinity binds each worker process to specific CPU cores. Each parameter is a binary mask: one mask per worker, with each bit standing for one CPU core; 1 means the worker may use that core and 0 means it may not. For example, worker_cpu_affinity 0001 0010 0100 1000; binds four workers to four different cores. By default, worker processes are not bound to any CPU.
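Putting the two directives together, a minimal sketch for a 4-core machine (an assumption; adjust the masks to match your CPU count) might look like:

```nginx
# Global (core) context: one worker per core, each pinned to its own core.
# Assumes a 4-core machine.
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
```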
- worker_rlimit_nofile
Sets the maximum number of open file descriptors per worker process. If not set, the limit is the system value reported by `ulimit -n`, which is generally 65535.
- worker_connections
Sets the theoretical maximum number of connections per worker process. A larger value allows more concurrent connections, but it must not exceed the value of worker_rlimit_nofile.
- use epoll
Selects epoll as the event-driven model. epoll is one of the high-performance event-driven libraries supported by Nginx and is widely regarded as an excellent event-driven model.
- accept_mutex off
Disables connection-accept serialization. When set to on, acceptance of new connections is serialized across Nginx worker processes so that they do not compete for the same connection. When the server has few connections, turning it on can reduce load to a certain extent. However, when server throughput is high, turning it off is more efficient, and it also spreads requests more evenly across workers. So we set accept_mutex off;
- multi_accept on
Allows a worker process to accept multiple new network connections at the same time.
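For a busy server, the event-module tuning described above can be sketched as follows (the connection count is illustrative, not prescriptive):

```nginx
events {
    use epoll;               # High-performance event model on Linux
    accept_mutex off;        # Let workers compete for connections; better under high throughput
    multi_accept on;         # Accept as many pending connections as possible per wake-up
    worker_connections 4096; # Illustrative value; keep it below worker_rlimit_nofile
}
```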
- sendfile on
sendfile is a system call introduced after Linux 2.0. It shortens the data path of network transmission and improves server performance.
Traditional network transmission (without sendfile):
Hard disk → kernel buffer → user buffer → kernel socket buffer → protocol stack
Network transmission with sendfile:
Hard disk → kernel buffer → protocol stack
- tcp_nopush on;
Lets Nginx accumulate data and transmit it in larger packets, which improves transmission efficiency. tcp_nopush must be used together with sendfile.
- tcp_nodelay on;
Sends small packets immediately instead of waiting to batch them; the default is on. It looks like the opposite of tcp_nopush, but when both are on, Nginx balances the use of the two behaviors.
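A common combination of the transfer-related directives above, as a sketch:

```nginx
http {
    sendfile on;      # Copy file data inside the kernel, skipping the user-space buffer
    tcp_nopush on;    # With sendfile: send headers and the start of the file in full packets
    tcp_nodelay on;   # Send the final, possibly small, packet without delay
}
```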
- keepalive_timeout
Sets how long an idle HTTP keep-alive connection stays open. Setting it too long leaves too many idle connections tying up resources. Choose it based on the server's request volume, processing speed, and network conditions.
- send_timeout
Sets the timeout for the Nginx server to respond to the client. It applies not to the whole response but to the interval between two write operations after the connection is established; if the client accepts no data within this time, Nginx closes the connection.
- gzip on
Enables on-the-fly gzip compression of response data to reduce the amount of data transmitted.
- gzip_disable "msie6"
Tells Nginx not to gzip responses for requests whose User-Agent matches the given pattern; gzip_disable "msie6" skips compression for IE6 browsers, which handle gzipped responses poorly.
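A typical gzip setup in the http block might look like this sketch (the compression level, minimum length, and type list are illustrative choices):

```nginx
http {
    gzip on;                # Compress responses on the fly
    gzip_comp_level 5;      # Illustrative trade-off between CPU cost and ratio (1-9)
    gzip_min_length 1024;   # Skip tiny responses where compression is not worthwhile
    gzip_types text/css application/javascript application/json;  # text/html is always compressed
    gzip_disable "msie6";   # Do not compress responses for IE6
}
```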
These are roughly the most commonly used configuration items. Different business scenarios call for additional directives, which are not covered here.
Other
The http configuration also contains location blocks, which match processing rules against the URI of each request.
location lookup rules
```nginx
location = / {
    # Exactly matches /; nothing may follow the hostname
    [ config A ]
}
location / {
    # Matches every request, since all URIs start with /
    # But regex matches and longer prefix matches take priority
    [ config B ]
}
location /documents/ {
    # Matches any URI beginning with /documents/ and keeps searching
    # Used only if no later regular expression matches
    [ config C ]
}
location ~ /documents/Abc {
    # Matches any URI beginning with /documents/Abc and keeps searching
    # Used only if no later regular expression matches
    [ config CC ]
}
location ^~ /images/ {
    # Matches any URI beginning with /images/ and stops searching; this rule is used
    [ config D ]
}
location ~* \.(gif|jpg|jpeg)$ {
    # Matches all requests ending in gif, jpg, or jpeg
    # But requests under /images/ are handled by config D, because ^~ prevents reaching this rule
    [ config E ]
}
location /images/ {
    # Prefix match on /images/; searching continues and finds the ^~ rule
    [ config F ]
}
location /images/abc {
    # Longest prefix match on /images/abc; searching continues and finds the ^~ rule
    # The relative order of F and G does not matter
    [ config G ]
}
location ~ /images/abc/ {
    # Effective only if config D is removed: the longest prefix match is config G,
    # searching continues, and this regex matches and is used
    [ config H ]
}
```
The matching priority, from high to low, is as follows:
"=" marks an exact match; for example, config A matches only the root URI with no string after it.
"^~" marks a prefix match on a plain (non-regex) string; if it matches, regular expressions are not checked.
"~" marks a case-sensitive regular-expression match.
"~*" marks a case-insensitive regular-expression match.
"/" is the universal match; any request falls through to it if nothing else matches.
Load balancing configuration
Nginx load balancing relies on the upstream module and can be configured as follows:
```nginx
upstream test-upstream {
    ip_hash;             # Distribute requests using the ip_hash algorithm
    server 192.168.1.1;  # Backend servers to distribute to
    server 192.168.1.2;
}

server {
    location / {
        proxy_pass http://test-upstream;
    }
}
```
The example above defines a load-balancing group named test-upstream; the proxy_pass reverse-proxy directive forwards requests to this group for distribution.
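Besides ip_hash, the upstream module supports other distribution strategies; a sketch with illustrative server addresses and weights:

```nginx
upstream weighted-upstream {
    # Weighted round-robin: the weight=3 server receives roughly 3x the requests
    server 192.168.1.1 weight=3;
    server 192.168.1.2 weight=1;
    server 192.168.1.3 backup;  # Used only when the other servers are unavailable
}

upstream least-conn-upstream {
    least_conn;          # Prefer the server with the fewest active connections
    server 192.168.1.1;
    server 192.168.1.2;
}
```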