Getting started with nginx Ingress Controller

Keywords: Web Server, Nginx, Tomcat, Kubernetes, inotify

What is Ingress?

In a Kubernetes cluster, an Ingress is a set of rules that authorizes inbound connections to reach cluster Services, providing layer-7 load balancing. Through Ingress configuration it can provide externally reachable URLs, load balancing, SSL termination, name-based virtual hosting, and so on. In short, an Ingress is a set of rules that implements URL-to-Service routing in Kubernetes. But since an Ingress is just rules, what actually implements those rules? An Ingress controller does, and at present the nginx Ingress controller is the most commonly used implementation.

The nginx-ingress-controller can be understood as an nginx application that proxies back-end services: it translates Ingress resources into the corresponding nginx configuration to provide layer-7 routing. Since the nginx-ingress-controller behaves like a gateway, it must itself be reachable from outside the cluster, so we need to expose it externally. In Kubernetes this is done by creating a Service of type LoadBalancer for it: nginx-ingress-lb. Accordingly, external access to the nginx-ingress-controller goes through the SLB (the cloud load-balancing product) associated with the nginx-ingress-lb Service. For the corresponding SLB configuration strategy, refer to the previous article on how Services are implemented.

The simplified request chain is as follows:

Client --> SLB --> nginx-ingress-lb Service --> nginx-ingress-controller pod --> app Service --> app pod

If we use an Ingress to expose services, the corresponding resources must all be in place for it to work: the nginx-ingress-controller pods must be running, the nginx-ingress-lb Service and its SLB listener configuration must be correct, and the back-end application the Ingress references must also be configured correctly, meaning running application pods and a correct application Service.
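A quick way to check each of these prerequisites is with kubectl; this is only a sketch, using the resource names from this article:

# Are the controller pods running?
kubectl get pods -n kube-system -l app=ingress-nginx
# Does the LoadBalancer Service have an external IP (the SLB address)?
kubectl get svc -n kube-system nginx-ingress-lb
# Are the application pods running, and does the application Service have endpoints?
kubectl get pods -n default -l app=tomcat
kubectl get endpoints -n default tomcat-svc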

Next we create an Ingress to implement our requirement. Let's build a simple one and see what it does in practice. The goal of our Ingress: requests for the domain name ingress.test.com should be routed to the Tomcat application at the back end.

Related configuration:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: tomcat
  name: tomcat
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: tomcat
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - image: 'tomcat:latest'
          imagePullPolicy: Always
          name: tomcat
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: default
spec:
  clusterIP: 172.21.6.143
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080 # A common Service misconfiguration: targetPort must be the port the pod actually exposes, not some other port.
  selector:
    app: tomcat
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  namespace: default
spec:
  rules:
    - host: ingress.test.com
      http:
        paths:
          - backend:
              serviceName: tomcat-svc
              servicePort: 8080
            path: /

After the Ingress is created successfully, an endpoint IP is generated automatically. We then create a DNS A record resolving the domain name ingress.test.com to this endpoint IP. From then on, visiting ingress.test.com sends the request to our Tomcat application.
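Assuming the three manifests above are saved to a file such as tomcat-ingress.yaml (a file name chosen here only for illustration), they can be applied and the endpoint IP read back like this:

kubectl apply -f tomcat-ingress.yaml
# The ADDRESS column shows the endpoint IP once it has been assigned
kubectl get ingress tomcat -n default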

Test results:

# curl http://<endpoint IP> -H "Host: ingress.test.com" -I
HTTP/1.1 200 
Date: Thu, 26 Sep 2019 04:55:39 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding

Configuration analysis of nginx-ingress-controller

The YAML is as follows:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: ingress-nginx
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ingress-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
      labels:
        app: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - ingress-nginx
                topologyKey: kubernetes.io/hostname
              weight: 100
      containers:
        - args:
            - /nginx-ingress-controller
            - '--configmap=$(POD_NAMESPACE)/nginx-configuration'
            - '--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services'
            - '--udp-services-configmap=$(POD_NAMESPACE)/udp-services'
            - '--annotations-prefix=nginx.ingress.kubernetes.io'
            - '--publish-service=$(POD_NAMESPACE)/nginx-ingress-lb'
            - '--v=2'
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: >-
            registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller:v0.22.0.5-552e0db-aliyun
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: {}
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            procMount: Default
            runAsUser: 33
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/localtime
              name: localtime
              readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
        - command:
            - /bin/sh
            - '-c'
            - |
              # Tune kernel parameters for a proxy handling many connections
              sysctl -w net.core.somaxconn=65535
              sysctl -w net.ipv4.ip_local_port_range="1024 65535"
              sysctl -w fs.file-max=1048576
              # The controller watches many files via inotify; raise the limits
              sysctl -w fs.inotify.max_user_instances=16384
              sysctl -w fs.inotify.max_user_watches=524288
              sysctl -w fs.inotify.max_queued_events=16384
          image: 'registry-vpc.cn-shenzhen.aliyuncs.com/acs/busybox:latest'
          imagePullPolicy: Always
          name: init-sysctl
          resources: {}
          securityContext:
            privileged: true
            procMount: Default
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-controller
      serviceAccountName: nginx-ingress-controller
      terminationGracePeriodSeconds: 30
      volumes:
        - hostPath:
            path: /etc/localtime
            type: File
          name: localtime
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress-lb
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  clusterIP: 172.21.11.181
  externalTrafficPolicy: Local
  healthCheckNodePort: 32435
  ports:
    - name: http
      nodePort: 31184
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      nodePort: 31972
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer

The configuration that needs special attention is the container's args:

  • --configmap=$(POD_NAMESPACE)/nginx-configuration specifies the ConfigMap from which the nginx-ingress-controller reads its nginx configuration. By default this is kube-system/nginx-configuration.
  • --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb specifies the LoadBalancer Service whose external IP the controller publishes as the endpoint address of the Ingresses it manages. By default this is kube-system/nginx-ingress-lb.
  • --ingress-class=<class> is an identifier of the nginx-ingress-controller itself; it declares "who I am". If not configured, it defaults to "nginx". What is this for? It lets each Ingress choose which ingress controller should handle it, via the annotation kubernetes.io/ingress.class: "<class>" (see the sketch below). An Ingress without this annotation is handled by the controller whose --ingress-class is "nginx".
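For example, if a second controller were deployed with --ingress-class=nginx-internal (a hypothetical class name, used here only for illustration), an Ingress would select it like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-internal
  namespace: default
  annotations:
    # Route this Ingress to the controller started with --ingress-class=nginx-internal
    kubernetes.io/ingress.class: 'nginx-internal'
spec:
  rules:
    - host: ingress-internal.test.com
      http:
        paths:
          - backend:
              serviceName: tomcat-svc
              servicePort: 8080
            path: /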

For how to deploy multiple sets of Nginx Ingress Controller in an Alibaba Cloud Kubernetes cluster, see the reference document:

https://yq.aliyun.com/articles/645856

We also said that an Ingress is a set of rules that gets distributed to the Ingress controller, which implements the corresponding functions. Let's see what the distributed configuration looks like. We can look at the nginx configuration inside an nginx-ingress-controller pod: in /etc/nginx/nginx.conf, besides some common configuration, the Ingress above generates the following nginx.conf section:

## start server ingress.test.com
    server {
        server_name ingress.test.com ;
        
        listen 80;
        
        set $proxy_upstream_name "-";
        
        location / {
            
            set $namespace      "default";
            set $ingress_name   "tomcat";
            set $service_name   "tomcat-svc";
            set $service_port   "8080";
            set $location_path  "/";
            
            rewrite_by_lua_block {
                balancer.rewrite()
            }
            
            access_by_lua_block {
                balancer.access()
            }
            
            header_filter_by_lua_block {
                
            }
            body_filter_by_lua_block {
                
            }
            
            log_by_lua_block {
                
                balancer.log()
                
                monitor.call()
                
            }
            
            port_in_redirect off;
            
            set $proxy_upstream_name    "default-tomcat-svc-8080";
            set $proxy_host             $proxy_upstream_name;
            
            client_max_body_size                    100m;
            
            proxy_set_header Host                   $best_http_host;
            
            # Pass the extracted client certificate to the backend
            
            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            
            proxy_set_header                        Connection        $connection_upgrade;
            
            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $the_real_ip;
            
            proxy_set_header X-Forwarded-For        $the_real_ip;
            
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            
            proxy_set_header X-Original-URI         $request_uri;
            
            proxy_set_header X-Scheme               $pass_access_scheme;
            
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";
            
            # Custom headers to proxied server
            
            proxy_connect_timeout                   10s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;
            
            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;
            proxy_request_buffering                 on;
            
            proxy_http_version                      1.1;
            
            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;
            
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_tries               3;
            
            proxy_pass http://upstream_balancer;
            
            proxy_redirect                          off;
            
        }
        
    }
    ## end server ingress.test.com
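To inspect this yourself, you can exec into one of the controller pods. A sketch, with the pod name as a placeholder to be replaced by a real one from your cluster:

# Find a controller pod, then print the server block generated for our Ingress
kubectl get pods -n kube-system -l app=ingress-nginx
kubectl exec -n kube-system <nginx-ingress-controller-pod> -- \
  grep -A 30 'start server ingress.test.com' /etc/nginx/nginx.conf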

nginx.conf contains many more directives that we will not explain in detail here; we will cover some common functional configurations in later articles.

Meanwhile, recent versions of the nginx-ingress-controller enable dynamic updating of upstreams by default. You can see the current backend list by requesting http://127.0.0.1:18080/configuration/backends from inside an nginx-ingress-controller pod.
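One way to issue that request from outside the pod; again a sketch with a placeholder pod name:

kubectl exec -n kube-system <nginx-ingress-controller-pod> -- \
  curl -s http://127.0.0.1:18080/configuration/backends

The output, formatted here for readability, looks like this: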

[{"name":"default-tomcat-svc-8080","service":{"metadata":{"creationTimestamp":null},"spec":{"ports":[{"protocol":"TCP","port":8080,"targetPort":8080}],"selector":{"app":"tomcat"},"clusterIP":"172.21.6.143","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},"port":8080,"secureCACert":{"secret":"","caFilename":"","pemSha":""},"sslPassthrough":false,"endpoints":[{"address":"172.20.2.141","port":"8080"}],"sessionAffinityConfig":{"name":"","cookieSessionAffinity":{"name":"","hash":""}},"upstreamHashByConfig":{"upstream-hash-by-subset-size":3},"noServer":false,"trafficShapingPolicy":{"weight":0,"header":"","cookie":""}},{"name":"upstream-default-backend","port":0,"secureCACert":{"secret":"","caFilename":"","pemSha":""},"sslPassthrough":false,"endpoints":[{"address":"127.0.0.1","port":"8181"}],"sessionAffinityConfig":{"name":"","cookieSessionAffinity":{"name":"","hash":""}},"upstreamHashByConfig":{},"noServer":false,"trafficShapingPolicy":{"weight":0,"header":"","cookie":""}}]

Here we can see the mapping between the Service associated with the Ingress and its endpoints; this is how a request ultimately reaches a specific business pod.

For details on dynamic updating of the routing configuration, see: https://yq.aliyun.com/articles/692732

In the following articles, we will talk about some common usage scenarios in detail.

Posted by Simplicity on Mon, 21 Oct 2019 02:37:14 -0700