HTTP Reverse Proxy Configuration
Let's start with a small goal: get a working HTTP reverse proxy up and running, without worrying about complex configuration yet.
The nginx.conf configuration file is as follows:
Note: conf/nginx.conf is the default configuration file for nginx. You can also specify your own configuration file with nginx -c.
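For example, assuming the install layout used throughout this article (D:/Tools/nginx-1.10.1), checking and then starting nginx with an explicit configuration file might look like this:
#Check the configuration file for syntax errors first
nginx -t -c D:/Tools/nginx-1.10.1/conf/nginx.conf
#Start nginx with that configuration file
nginx -c D:/Tools/nginx-1.10.1/conf/nginx.conf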
#Running User
#user somebody;
#Number of worker processes, usually set equal to the number of CPUs
worker_processes 1;
#Global Error Log
error_log D:/Tools/nginx-1.10.1/logs/error.log;
error_log D:/Tools/nginx-1.10.1/logs/notice.log notice;
error_log D:/Tools/nginx-1.10.1/logs/info.log info;
#PID file, records the process ID of the currently started nginx
pid D:/Tools/nginx-1.10.1/logs/nginx.pid;
#Operating mode and maximum number of connections
events {
worker_connections 1024; #Maximum number of concurrent connections per worker process
}
#Set up an http server to leverage its reverse proxy capabilities to provide load balancing support
http {
#Set MIME types; the types are defined in the mime.types file
include D:/Tools/nginx-1.10.1/conf/mime.types;
default_type application/octet-stream;
#Set Log
log_format main '[$remote_addr] - [$remote_user] [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log D:/Tools/nginx-1.10.1/logs/access.log main;
rewrite_log on;
#The sendfile directive specifies whether nginx uses the sendfile system call (zero-copy mode) to output files. For normal applications it should be on.
#For disk-heavy applications such as download services it can be set to off, to balance disk and network I/O and reduce system load.
sendfile on;
#tcp_nopush on;
#Connection timeout
keepalive_timeout 120;
tcp_nodelay on;
#gzip compression switch
#gzip on;
#Set the actual server list
upstream zp_server1 {
server 127.0.0.1:8089;
}
#HTTP Server
server {
#Listen on port 80, which is a well-known port number for the HTTP protocol
listen 80;
#Define access using www.xx.com
server_name www.helloworld.com;
#Home page
index index.html;
#Directory pointing to webapp
root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;
#Encoding Format
charset utf-8;
#Proxy configuration parameters
proxy_connect_timeout 180;
proxy_send_timeout 180;
proxy_read_timeout 180;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
#Reverse proxy path (bound to the upstream above); the path to map is set after location
location / {
proxy_pass http://zp_server1;
}
#Static files are handled by nginx itself
location ~ ^/(images|javascript|js|css|flash|media|static)/ {
root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp\views;
#Expire after 30 days. Static files are rarely updated, so the expiration can be set longer; if they change frequently, set it shorter.
expires 30d;
}
#Set an address to view Nginx status
location /NginxStatus {
stub_status on;
access_log off; #access_log does not take on as a value; disable separate access logging for the status page
auth_basic "NginxStatus";
auth_basic_user_file conf/htpasswd;
}
#Deny access to .ht* files (e.g. .htaccess, .htpasswd)
location ~ /\.ht {
deny all;
}
#Error handling page (optional configuration)
#error_page 404 /404.html;
#error_page 500 502 503 504 /50x.html;
#location = /50x.html {
# root html;
#}
}
}
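A side note on the /NginxStatus location above: stub_status requires the ngx_http_stub_status_module, and the password file referenced by auth_basic_user_file must be created separately. One way to create it (a sketch; htpasswd comes from the Apache httpd tools, not from nginx, and admin is just an example user name):
#Run from the nginx install directory; creates conf/htpasswd with user admin (you will be prompted for a password)
htpasswd -c conf/htpasswd admin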
Okay, let's try it:
- Start the webapp; note that the port it listens on must match the port configured in the upstream block in nginx.
- Change the hosts file: add the following entry to the hosts file in the C:\Windows\System32\drivers\etc directory:
127.0.0.1 www.helloworld.com
- Start nginx with the startup.bat command from the previous section.
- Visit www.helloworld.com in your browser; unsurprisingly, it should now be accessible.
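To double-check from the command line, a quick request through nginx should return the webapp's home page (a sketch, assuming curl is installed):
#The hosts entry points www.helloworld.com at 127.0.0.1, so this request goes through nginx to upstream zp_server1
curl -i http://www.helloworld.com/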
Load Balancing Configuration
In the previous example, the proxy pointed to only one server.
However, in real-world operation, the same application usually runs on multiple servers, and load balancing is needed to distribute the traffic.
nginx also provides simple load balancing capabilities.
Suppose the application is deployed on three servers in a Linux environment: 192.168.1.11:80, 192.168.1.12:80, and 192.168.1.13:80. The website domain name is www.helloworld.com, and the public IP is 192.168.1.11. nginx is deployed on the server holding the public IP and load balances all requests.
The nginx.conf configuration is as follows:
http {
#Set the MIME types, defined in the mime.types file
include /etc/nginx/mime.types;
default_type application/octet-stream;
#Set Log Format
access_log /var/log/nginx/access.log;
#Set load balancing server list
upstream load_balance_server {
#The weight parameter sets the weight; the higher the weight, the greater the probability of being assigned requests
server 192.168.1.11:80 weight=5;
server 192.168.1.12:80 weight=1;
server 192.168.1.13:80 weight=6;
}
#HTTP Server
server {
#Listen on port 80
listen 80;
#Define access using www.xx.com
server_name www.helloworld.com;
#Load balance all requests
location / {
root /root; #Define the default site root location for the server
index index.html index.htm; #Define the name of the index file on the first page
proxy_pass http://load_balance_server; #Forward requests to the server list defined by load_balance_server
#The following are some of the reverse proxy configurations (optional configurations)
#proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
#The backend web server can get the user's real IP through X-Forwarded-For
proxy_set_header X-Forwarded-For $remote_addr;
proxy_connect_timeout 90; #nginx connection timeout with back-end server (proxy connection timeout)
proxy_send_timeout 90; #Timeout for sending a request to the backend server (proxy send timeout)
proxy_read_timeout 90; #Timeout for reading the backend server's response after a successful connection (proxy read timeout)
proxy_buffer_size 4k; #Size of the buffer where the proxy server (nginx) holds the first part of the backend response (headers)
proxy_buffers 4 32k; #proxy_buffers: if the average page size is under 32k, this setting is sufficient
proxy_busy_buffers_size 64k; #Buffer size under high load (proxy_buffers * 2)
proxy_temp_file_write_size 64k; #Size of data written to a temporary file at a time when buffering upstream responses to disk
client_max_body_size 10m; #Maximum allowed size of a client request body
client_body_buffer_size 128k; #Buffer size for reading client request bodies
}
}
}
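The example above uses weighted round-robin. The standard upstream module also supports other strategies, for example ip_hash (requests from the same client IP always go to the same backend, which helps with session stickiness) and least_conn (prefer the server with the fewest active connections). A minimal sketch of the ip_hash variant, using the same server list:
upstream load_balance_server {
    #Requests from the same client IP are always routed to the same backend server
    ip_hash;
    server 192.168.1.11:80;
    server 192.168.1.12:80;
    server 192.168.1.13:80;
}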
Configuration for a Website with Multiple Webapps
As a website grows in functionality, it is often necessary to split out relatively independent modules for separate maintenance. In this case, there are usually multiple webapps.
Example: suppose the www.helloworld.com site hosts several webapps: finance, product, and admin. These applications are accessed through different context paths:
www.helloworld.com/finance/
www.helloworld.com/product/
www.helloworld.com/admin/
We know that the default port for HTTP is 80. You cannot start all three webapps on the same server on port 80 at the same time, so each application has to bind to a different port.
The problem is that when users visit www.helloworld.com and move between the different webapps, they never type the corresponding port numbers. So, once again, a reverse proxy is needed.
The configuration is not difficult; let's see how to do it:
http {
#Some basic configurations are omitted here
upstream product_server {
server www.helloworld.com:8081;
}
upstream admin_server {
server www.helloworld.com:8082;
}
upstream finance_server {
server www.helloworld.com:8083;
}
server {
#Some basic configurations are omitted here
#Requests default to the product server
location / {
proxy_pass http://product_server;
}
location /product/ {
proxy_pass http://product_server;
}
location /admin/ {
proxy_pass http://admin_server;
}
location /finance/ {
proxy_pass http://finance_server;
}
}
}
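One detail worth noting (not part of the original example): because proxy_pass above is written without a URI, nginx forwards the request path unchanged, so the backend behind product_server receives paths like /product/xxx. If the backend expects paths without that prefix, adding a trailing slash to proxy_pass makes nginx replace the matched location prefix:
location /product/ {
    #The trailing slash strips the /product/ prefix before the request is forwarded
    proxy_pass http://product_server/;
}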
HTTPS Reverse Proxy Configuration
Some sites with higher security requirements may use HTTPS, a secure HTTP protocol that uses the SSL/TLS communication standard.
This article won't go into the HTTP protocol or the SSL standard themselves. However, there are a few things you need to know to configure HTTPS with nginx:
- HTTPS uses the fixed port number 443, unlike HTTP's port 80
- The SSL standard requires a security certificate, so you need to specify the certificate and its corresponding key in nginx.conf (a test certificate can be generated as shown below)
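For local testing, a self-signed certificate and key can be generated with openssl (a sketch; the file names match the configuration below, but a production site should use a certificate issued by a CA):
#Generate a self-signed certificate valid for 365 days, with an unencrypted key
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout cert.key -out cert.pem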
The rest of the reverse proxy configuration is basically the same as for HTTP; the differences are in the server section.
#HTTPS Server
server {
#Listen on port 443. 443 is a well-known port number, used mainly for the HTTPS protocol
listen 443 ssl;
#Define access using www.xx.com
server_name www.helloworld.com;
#ssl certificate file location (common certificate file format: crt/pem)
ssl_certificate cert.pem;
#ssl certificate key location
ssl_certificate_key cert.key;
#ssl configuration parameters (optional configuration)
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
#Cipher suites: prefer high-strength ciphers and exclude anonymous and MD5-based ones
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
root /root;
index index.html index.htm;
}
}
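In practice, an HTTPS site usually keeps a plain HTTP server block whose only job is to redirect visitors to HTTPS. This is not part of the original example, but a minimal sketch looks like this:
server {
    listen 80;
    server_name www.helloworld.com;
    #Permanently redirect all HTTP requests to the HTTPS version of the site
    return 301 https://$host$request_uri;
}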
Static Site Configuration
Sometimes we need to configure static sites (that is, html files and a bunch of static resources).
For example: if all the static resources are under the /app/dist directory, we only need to specify the home page and the host in nginx.conf.
The configuration is as follows:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
gzip_vary on;
server {
listen 80;
server_name static.zp.cn;
location / {
root /app/dist;
index index.html;
#Goal: forward any request to index.html (see the note after this configuration)
}
}
}
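As noted in the comment above, the intent is for any request to end up at index.html, but with only root and index, nginx returns 404 for paths that do not map to a real file. If the static site is a single-page application with client-side routing, a try_files fallback is usually added (a sketch, not part of the original configuration):
location / {
    root /app/dist;
    index index.html;
    #Serve the requested file if it exists, otherwise fall back to index.html
    try_files $uri $uri/ /index.html;
}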
Then add a hosts entry:
127.0.0.1 static.zp.cn
At this point, you can open static.zp.cn in your local browser to access the static site.