Nginx Learning Reorganization

Keywords: Web Server Nginx Linux firewall network

Installation and Configuration of Nginx

0 Installation

Download it from the official website. Note that Nginx is written in C and is not cross-platform; each operating system needs its own Nginx build.

1 Nginx Basic Commands under Windows

Start
start nginx

Stop
./nginx -s quit    (graceful shutdown)
./nginx -s stop    (fast shutdown)

Show version information
./nginx -V

Reload configuration
./nginx -s reload

2 Nginx Basic Commands under Linux

Start
./sbin/nginx

Stop
./sbin/nginx -s quit    (graceful shutdown)
./sbin/nginx -s stop    (fast shutdown)

Show version information
./sbin/nginx -V    (version plus configure arguments)
./sbin/nginx -v    (version only)

Reload configuration
./sbin/nginx -s reload

3 Installation of Nginx under CentOS 7

The Linux version of Nginx differs from the Windows version in that only the source code is provided; the compilation steps are left to the user.

step 1 - Use CentOS's yum to install the build environment
    yum -y install make zlib zlib-devel gcc-c++ libtool  openssl openssl-devel
step 2 - Enter the Nginx source directory and run ./configure
    No 1 - If permission issues arise, use bash ./configure instead
    No 2 - To install to a specific directory, use ./configure --prefix=[path]
step 3 - Run make
step 4 - Run make install
step 5 - By default an nginx directory appears under /usr/local (if you changed the path during configure, it will not be there)
step 6 - Enter the sbin directory under it and operate Nginx with the commands above
step 7 - Note that for Nginx to use a port, the firewall must allow that port
    No 1 - On an Alibaba Cloud server, you can configure the port rules in the security group
    No 2 - On your own virtual machine or physical server, open the firewall port yourself
        [1] firewall-cmd --zone=public --add-port=80/tcp --permanent
        [2] firewall-cmd --reload
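The steps above, gathered into one command transcript (the source directory name and install prefix are examples, not fixed values):

```
yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel
cd nginx-1.16.1                              # unpacked source directory (example)
./configure --prefix=/usr/local/nginx        # or: bash ./configure
make
make install
/usr/local/nginx/sbin/nginx                  # start Nginx from the install prefix
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload
```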

Back-end Configuration Code for Cross-domain Requests

If you use Nginx as a static resource server for front-end/back-end separation, the front-end pages run into cross-domain issues. On the Java side, Spring can be used to grant cross-domain permissions:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

/**
 * Configuration classes for solving cross-domain problems
 * Let Spring scan to this class
 */
@Configuration
public class CorsConfig{
    // Allowed HTTP methods
    static final String[] ALLOWED_METHODS = { "GET", "POST", "PUT", "DELETE" };

    @Bean
    public WebMvcConfigurer corsConfigurer() {
        return new WebMvcConfigurer() {
            @Override
            public void addCorsMappings(CorsRegistry registry) {
                // Allow cross-domain requests on all paths
                registry.addMapping("/**").allowedOrigins("*").allowCredentials(true)
                        .allowedMethods(ALLOWED_METHODS).maxAge(3600);
            }
        };
    }
}

Some Conceptual Reflections on Nginx

What is Nginx

Nginx is an event-driven web server, mainly used to serve HTTP and HTTPS.
Nginx has two main functions - reverse proxy and load balancing.
In addition, it has good scalability and supports many third-party libraries.
It is lighter than Apache and supports hot updates.

How Nginx achieves high performance

Nginx runs one master process and multiple worker processes; the master process keeps Nginx alive and reads the configuration, while the worker processes handle requests.
When a request comes in, a worker process handles it, and if the request blocks, the worker switches to other requests in the meantime.
That is, Nginx can be thought of as processing requests asynchronously, with many requests per worker, which makes Nginx very good at IO-intensive work.
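As a toy illustration (not Nginx's actual implementation), the one-worker-many-requests model can be sketched with an event loop: a single `worker()` coroutine drives three simulated requests concurrently, so the slowest one does not block the others.

```python
import asyncio

async def handle_request(req_id: int, delay: float) -> str:
    # Simulated blocking IO (e.g. a slow upstream read); while this
    # request waits, the worker is free to serve the other requests.
    await asyncio.sleep(delay)
    return f"request {req_id} done"

async def worker():
    # One "worker" drives many requests concurrently on one event loop;
    # gather() preserves submission order in its result list.
    tasks = [handle_request(i, 0.03 - 0.01 * i) for i in range(3)]
    return await asyncio.gather(*tasks)

results = asyncio.run(worker())
print(results)
```

Request 2 finishes first, yet `results` keeps submission order, just as a worker's event loop tracks each request independently of completion order.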

Blocking Requests in Nginx

The server block below returns 444, a non-standard Nginx status code that simply closes the connection, for requests whose Host header is empty:

server {
    listen 80;
    server_name "";
    return 444;
}

Nginx and the Thundering Herd Problem

The thundering herd problem occurs when multiple processes are woken up to compete for the same resource, wasting CPU. In Nginx, multiple worker processes listen on the same port together.
Nginx puts a mutex on the port, so that only one worker process listens on it at a time.
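A minimal sketch of the idea, using Python threads in place of Nginx's worker processes and a plain lock in place of the accept mutex: a worker may only take a connection while holding the mutex, so every connection is accepted exactly once and the workers never wake up to race for the same event.

```python
import threading
import queue

accept_mutex = threading.Lock()

# Simulated listening socket: a queue of pending connection ids
pending = queue.Queue()
for conn_id in range(100):
    pending.put(conn_id)

accepted = []

def worker(name: str) -> None:
    while True:
        # Only the mutex holder may "accept"; the other workers
        # simply wait for the lock instead of stampeding.
        with accept_mutex:
            try:
                conn = pending.get_nowait()
            except queue.Empty:
                return
            accepted.append((conn, name))

threads = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every connection was handled exactly once despite four competing workers
assert sorted(c for c, _ in accepted) == list(range(100))
```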

Nginx Large File Upload

In real web development you may need to upload large files. In this case, if you use Nginx as a static resource server, you will run into a series of issues that require configuration adjustments.

How Nginx works:

      (Server-side phase) (Reverse proxy phase)
Browser --> Nginx --> java project

When a browser accesses Nginx, Nginx acts as the server; in the Nginx configuration file, the client_xxxx directives configure this phase.
When Nginx accesses the back-end Java project, Nginx acts as a client; this phase is configured with the proxy_xxxx directives.

In Nginx's nginx.conf, the relevant settings go in the http module:

# Maximum request body size, mainly relevant when uploading attachments
client_max_body_size  500m;
# Buffer size for reading the request body from the browser; directly affects file transfer speed
client_body_buffer_size 100m;
# Buffer size for the request header; if the header fits in it, this buffer is used to read the header
client_header_buffer_size 16k;
# If the request header exceeds client_header_buffer_size, large_client_header_buffers is used instead
large_client_header_buffers 4 32k;


# Timeout, in seconds, for Nginx to establish a connection with the upstream server
proxy_connect_timeout 60s;
# Timeout, in seconds, for transmitting a request to the upstream server
proxy_send_timeout 60s;
# Timeout, in seconds, for reading the response from the upstream server after it has processed the request
proxy_read_timeout 60s;

# proxy_buffers and proxy_busy_buffers_size only take effect when proxy_buffering is on
proxy_buffering on;
# Number and size of buffers used for reading the upstream response
proxy_buffers 4 8k;

If the back end uses Spring Boot 2.0, you also need to add the following configuration in application.properties:

spring.servlet.multipart.max-file-size=500MB
spring.servlet.multipart.max-request-size=500MB
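If the project uses application.yml instead of application.properties, the same two settings take this equivalent form (assuming Spring Boot 2.x):

```yaml
spring:
  servlet:
    multipart:
      max-file-size: 500MB
      max-request-size: 500MB
```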

Configuration File Integration for Nginx

There is an nginx.conf under the conf folder in the Nginx directory. Its contents are as follows:

# Set the user that worker processes run as
# On Linux, if no user is set, Nginx may lack permission to access static resources
# user root;

# Number of worker processes, typically set to the number of CPU cores
worker_processes  8;


# Maximum number of file descriptors a single process may open
# Best kept consistent with the Linux limit
# View the limit on Linux with (ulimit -n)
worker_rlimit_nofile 65535;

# Error log location
error_log  logs/error.log;

# File that stores the master process pid
# Defaults to logs/nginx.pid if not set
pid nginx.pid;


# Used to specify the working mode and maximum number of connections for Nginx
events {

    # Available event models include select, poll, epoll, and kqueue; the first two are standard models, the latter two are the efficient ones
    # On Linux, epoll is generally the model of choice
    # On Windows, select is used
    use epoll; 

    # Maximum number of connections per process, limited by maximum number of handles on Linux
    # Note that Nginx is multiprocess
    worker_connections  1024; 

    # Serialize accept() across worker processes to prevent the thundering herd problem
    # on to enable, off to disable (the default was on before Nginx 1.11.3, off since)
    accept_mutex on;

    # Whether a worker process accepts multiple new connections at a time
    # on to enable, off to disable; off by default
    multi_accept on;
}

# Core Configuration Modules
http {

    # File extension and file type mapping table
    include       mime.types;

    # Default MIME type for files not in the mapping table (the built-in default is text/plain)
    default_type  application/octet-stream;

    # Custom format for logs
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    # Access log
    access_log  logs/access.log  main;

    # Encoding Format
    charset  utf-8;
    
    # Turn on tcp_nopush to send the response header and the beginning of the file in one packet, reducing the number of network segments (works together with sendfile)
    tcp_nopush  on;

    # Turn on tcp_nodelay to disable Nagle's algorithm: small packets are sent immediately instead of being held back while the kernel waits for more bytes, which lowers latency
    tcp_nodelay on;

    # Whether to use the sendfile system call to transfer files
    # sendfile is a zero-copy IO mechanism introduced in Linux 2.2
    sendfile  on;

    # Maximum size of data transferred per call to sendfile() by the Nginx worker process
    # The default value is 0 (meaning unlimited) and can be set in the http/server/location module
    sendfile_max_chunk  1024k;

    # Keep-alive connection timeout, in seconds
    keepalive_timeout  65s;

    # Maximum number of requests per long connection, default 100
    keepalive_requests 100;

    # Turn on gzip compression
    gzip  on;

    # Maximum request body size, mainly relevant when uploading attachments
    client_max_body_size  500m;

    # Buffer size for reading the request body from the browser; directly affects file transfer speed
    client_body_buffer_size 100m;

    # Buffer size for the request header; if the header fits in it, this buffer is used to read the header
    client_header_buffer_size 16k;

    # If the request header exceeds client_header_buffer_size, large_client_header_buffers is used instead
    large_client_header_buffers 4 32k;

    # Timeout, in seconds, for Nginx to establish a connection with the upstream server
    proxy_connect_timeout 60s;

    # Timeout, in seconds, for transmitting a request to the upstream server
    proxy_send_timeout 60s;

    # Timeout, in seconds, for reading the response from the upstream server after it has processed the request
    proxy_read_timeout 60s;

    # proxy_buffers and proxy_busy_buffers_size only take effect when proxy_buffering is on
    proxy_buffering on;

    # Number and size of buffers used for reading the upstream response
    proxy_buffers 4 8k;


    # First of Nginx's main functions - distributing requests as a gateway
    # First you need to configure the port and url to listen on in the server module
    server {

        # Listening Port
        listen  8090;

        # Name of the service
        server_name  hello_world;

        # Encoding Format
        #charset gbk;
        charset utf-8;

        # Set the bandwidth available on a single connection in k - kb, m - mb
        limit_rate 500k;


        # Error pages for different error codes
        # error_page  404              /404.html;
        error_page   500 502 503 504  /50x.html;

        # If a client accesses http://ip:8090/, it is equivalent to accessing http://ip:8091/
        location / {
            proxy_pass http://localhost:8091/;
        }

        # If a client accesses http://ip:8090/api, it is equivalent to accessing http://ip:8092/
        location /api {
            proxy_pass http://localhost:8092/;
        }

        # Load Balancing Required
        location /api2 {

            # Nginx returns HTTP 408 (Request Timed Out) if the client does not send a complete request header within the specified time
            client_header_timeout 60s;

            # If the client does not send any request body content within the specified time, Nginx returns HTTP 408 (Request Timed Out)
            client_body_timeout 60s;

            # Timeout for server-side data transfer to client
            send_timeout 60s;

            # Maximum number of tries when passing a request to the next upstream server
            proxy_next_upstream_tries 3;

            # Timeout for Nginx connection to back-end business server
            proxy_connect_timeout 60s;

            # Read timeout for Nginx connection to back-end business server
            proxy_read_timeout 60s;

            # Send timeout for Nginx's connection to the back-end business server
            proxy_send_timeout 60s;

            # If one of the load-balanced back-end services fails with e.g. a 502 or 504, retry on the next one
            proxy_next_upstream http_500 http_502 http_503 http_504 error timeout invalid_header;

            # Whether to turn on asynchronous file IO (aio); off by default
            aio on;

            # When forwarding, copy fields such as Host, X-Real-IP and X-Forwarded-For from the original HTTP request into the forwarded request
            proxy_set_header Host  $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Set forwarding address
            proxy_pass http://hello_word_service/;
        }
    }

    # Configuring multiple services for load balancing purposes
    upstream hello_word_service {
        # Load balancing scheme one - distribute by weight
        server  localhost:8091 weight=5;
        server  localhost:8092 weight=5;

        # Load balancing scheme two - allocate by response time (fair is a third-party module)
        # server  localhost:8091;
        # server  localhost:8092;
        # fair;

        # Load balancing scheme three - allocate by the hash of the client IP
        # The hash of a given IP is fixed, so requests from the same IP always go to the same server
        # server  localhost:8091;
        # server  localhost:8092;
        # ip_hash;

        # Other options are not listed
    }




    # Second of Nginx's main functions - serving static resources while calling back-end interfaces
    # First you need to configure static resources in the server module
    server {

        # Listening Port
        listen  8100;

        # Name of the service
        server_name  demo_web;

        # Encoding Format
        charset utf-8;

        # Home page configuration, note that the read path must have a configured index.html or index.htm
        # If the home page is other, configure it here
        # If the client accesses http://ip:8100, the home page is displayed
        index index.html index.htm;

        # Read path of static resource, first read home page under path
        root D:\\demo_web;

        # If a client or page resource needs access to http://ip:8100/api, it is equivalent to accessing http://ip:8080/
        location /api {
            proxy_pass http://localhost:8080/;
        }
        
        # Load Balancing Required
        location /api2 {

            proxy_next_upstream http_500 http_502 http_503 http_504 error timeout invalid_header;
            proxy_set_header Host  $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Set forwarding address
            proxy_pass http://demo_web_service/;
        }
    }

    # Configure Service Address
    upstream demo_web_service {
        server  localhost:8080;
    }






    # Third of Nginx's main functions - forward proxy
    # Here Nginx simply forwards requests on to an external url
    server {

        # Listening Port
        listen  80;

        # Name of the service
        server_name  base_proxy;

        # Encoding Format
        charset utf-8;

        location / {
            proxy_pass http://baidu.com;
        }
    }



    # If you think the configuration file is too long, you can use include to pull in other configuration files
    # include servers.conf;
}
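The weight and ip_hash load-balancing schemes from the upstream blocks above can be sketched in Python. This is a simplified illustration, not Nginx's real algorithm: Nginx's ip_hash hashes the leading octets of the client address, while md5 here is just a stand-in, and equal weights (weight=5 / weight=5) degenerate to plain round-robin.

```python
import hashlib
from itertools import cycle

# Hypothetical backend list mirroring the upstream block above
servers = ["localhost:8091", "localhost:8092"]

def ip_hash(client_ip: str) -> str:
    # The same client IP always maps to the same backend,
    # which is what makes ip_hash "sticky" per client.
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

# Equal weights reduce weighted distribution to round-robin
round_robin = cycle(servers)
sequence = [next(round_robin) for _ in range(4)]

assert ip_hash("10.0.0.7") == ip_hash("10.0.0.7")   # sticky per IP
assert sequence == ["localhost:8091", "localhost:8092",
                    "localhost:8091", "localhost:8092"]
```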

Posted by Naez on Tue, 19 Nov 2019 19:33:30 -0800