Kubernetes deployment is simpler than you think: a complete walkthrough

Keywords: Linux Java Tomcat Docker Maven

What does it take to migrate a project to the k8s platform?

  1. Build an image
  2. Manage Pods with a controller
  3. Pod data persistence
  4. Expose the application
  5. Publish the application externally
  6. Logging / monitoring

1. Building an image takes three steps

  • First, the base image: decide which operating system to build on, such as CentOS 7
  • Second, the middleware (service) image: a service such as nginx or Tomcat layered on top of the base image
  • Third, the project image: built on top of the service image, it packages your project so that the project runs inside that service image

Generally, the operations team builds the images in advance and developers use them directly. The images must match the environment they are deployed into.
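To make the layering concrete, a middleware (service) image could be built roughly like this. This is only a sketch: the Tomcat tarball name and paths are assumptions, and the service image actually used later in this walkthrough is the prebuilt lizhenliang/tomcat.

# Service image sketch: Tomcat on a CentOS 7 base image
FROM centos:7
RUN yum -y install java-1.8.0-openjdk && yum clean all
# ADD auto-extracts the local tarball into /usr/local
ADD apache-tomcat-8.5.57.tar.gz /usr/local/
WORKDIR /usr/local/apache-tomcat-8.5.57
EXPOSE 8080
CMD ["bin/catalina.sh", "run"]

The project image is then just a few lines layered on top of this, as the Dockerfile shown later does.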

2. Manage Pods with a controller

That is, k8s deploys the image through a controller; in practice the most commonly used controller is Deployment.

  • Deployment: stateless deployment
  • StatefulSet: stateful deployment
  • DaemonSet: daemon deployment (one Pod per node)
  • Job & CronJob: batch tasks

What's the difference between stateless and stateful?

Stateful workloads have an identity: a stable network ID and stable storage, both planned in advance, and their Pods start and stop in a fixed order. Stateless workloads have neither, so any replica is interchangeable; the other axis is persistent versus non-persistent data. A minimal StatefulSet sketch follows.
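Here is a minimal StatefulSet sketch (the names, image and sizes are illustrative assumptions): each replica gets a stable name (web-0, web-1, ...) and its own PersistentVolumeClaim, which a plain Deployment does not provide.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web              # headless Service (assumed to exist) giving each Pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # each replica gets its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi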

3. Pod data persistence

Pod data persistence is mainly about the application itself: for example, if the project writes files to the local filesystem, then to guarantee those files survive a Pod restart or rescheduling, you must use persistent storage for the Pod.

There are generally three kinds of data in a container deployment (see the volume sketch after this list):

  • Initial data required at startup, such as a configuration file
  • Temporary data generated at runtime that needs to be shared among multiple containers
  • Persistent data generated at runtime
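A rough sketch of how these three kinds of data map onto volumes (the names, image and the referenced ConfigMap/PVC are assumptions for illustration): a ConfigMap for startup configuration, an emptyDir shared between containers in the Pod, and a PersistentVolumeClaim for data that must outlive the Pod.

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: nginx:1.17
    volumeMounts:
    - name: config              # 1. initial data: configuration file
      mountPath: /etc/app
    - name: cache               # 2. temporary data shared between containers
      mountPath: /tmp/cache
    - name: data                # 3. persistent data that survives Pod restarts
      mountPath: /var/lib/app
  volumes:
  - name: config
    configMap:
      name: app-config          # assumed to exist
  - name: cache
    emptyDir: {}
  - name: data
    persistentVolumeClaim:
      claimName: app-data       # assumed to exist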

4. Expose the application

In k8s, a Deployment by itself cannot be reached from outside; other applications that want to call it have no obvious address to use. Why? A Deployment usually runs multiple replicas that may be spread across different nodes, and whenever a Pod is recreated or the application is republished, the Pod IP changes, so there is no fixed Pod to point at. Even if you pinned one Pod's IP, it would be lost the next time that Pod was rebuilt.

To serve traffic for multiple Pods, a load balancer must sit in front of them to provide a single entry point. Clients access this unified entry, which forwards requests to the Pods behind it; in k8s this entry is the Service's ClusterIP, and traffic sent to the ClusterIP is forwarded to the backend Pods.

Service

  • A Service defines a logical set of Pods and the policy for accessing that set
  • Service is introduced to cope with Pods being dynamic and to provide service discovery and load balancing
  • CoreDNS resolves Service names inside the cluster (example below)
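For example, once the tomcat-java-demo Service created later in this walkthrough exists in the test namespace, any Pod in the cluster can reach it by name through CoreDNS:

# from a Pod in the same namespace (test), the short name resolves
curl http://tomcat-java-demo
# from a Pod in any namespace, the fully qualified name resolves
curl http://tomcat-java-demo.test.svc.cluster.local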

5. Publish the application externally

Once the application is exposed inside the cluster, users still need to reach it, for example an e-commerce site built for end users. Ingress and Service complement each other: a Service mainly provides access inside the cluster and can also expose a TCP/UDP port, while Ingress does layer-7 forwarding and provides a unified entry point. As long as traffic reaches the Ingress controller's address, it is forwarded to every project you have deployed, so all projects are accessed by domain name.

In a traditional setup, developers push code to the code repository, typically Git or GitLab. After the code is committed, a CI/CD platform pulls, compiles and builds it into a WAR package, which is handed to Ansible and deployed to virtual or physical machines. The project is then exposed through a load balancer, with a database, monitoring system and logging system providing supporting services.

With Kubernetes, the flow starts the same way: developers put the code in the repository, then Jenkins pulls it, compiles it and pushes the result to our image registry.

The difference is that the code is packaged into an image rather than a runnable WAR or JAR. The image contains both the runtime environment and the project code, so it can run on any Docker host and be accessed there. First make sure it runs on plain Docker, then deploy it on k8s. The built images are pushed to an image registry so they can be managed centrally.

Since dozens or even hundreds of images may be produced every day, they have to be managed through a registry. A script or pipeline step then talks to the k8s master, and k8s schedules the Pods according to the Deployment you defined. The applications are published through Ingress for users to access: each Ingress is associated with a group of Pods via a Service, which load-balances across that group and selects the Pods on the various nodes.

The database is usually placed outside the cluster, and the monitoring and logging systems can also be deployed outside the k8s cluster. Here we put them inside the cluster: they are not particularly sensitive, they are mainly used by operations and development for debugging, and they do not affect the business, so we prefer to deploy them in k8s.

Now let's deploy a Java project to our k8s cluster.

1. Install OpenJDK and Maven

[root@k8s-master ~]# yum -y install java-1.8.0-openjdk.x86_64 maven
[root@k8s-master ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)

Then pull the code locally. The Dockerfile sits in the same directory as the code:

[root@k8s-master tomcat-java-demo-master]# ls
db  Dockerfile  LICENSE  pom.xml  README.md  src
[root@k8s-master tomcat-java-demo-master]# vim Dockerfile
FROM lizhenliang/tomcat
LABEL maintainer zhaochengcheng
# remove the default webapps and deploy our WAR as the ROOT application
RUN rm -rf /usr/local/tomcat/webapps/*
ADD target/*.war /usr/local/tomcat/webapps/ROOT.war

2. Compile

Here we configure a domestic (China) mirror for Maven so downloads are faster:

[root@k8s-master CI]# vim /etc/maven/settings.xml
<mirrors>
  <mirror>
    <id>central</id>
    <mirrorOf>central</mirrorOf>
    <name>aliyun maven</name>
    <url>https://maven.aliyun.com/repository/public</url>
  </mirror>
</mirrors>
[root@k8s-master tomcat-java-demo-master]# mvn clean package -Dmaven.test.skip=true
[root@k8s-master tomcat-java-demo-master]# ls
db  Dockerfile  LICENSE  pom.xml  README.md  src  target
[root@k8s-master tomcat-java-demo-master]# cd target/
[root@k8s-master target]# ls
classes  generated-sources  ly-simple-tomcat-0.0.1-SNAPSHOT  ly-simple-tomcat-0.0.1-SNAPSHOT.war  maven-archiver  maven-status

We take the compiled WAR package, build it into an image and push it to our Harbor registry.

[root@k8s-master target]# ls
classes  generated-sources  ly-simple-tomcat-0.0.1-SNAPSHOT  ly-simple-tomcat-0.0.1-SNAPSHOT.war  maven-archiver  maven-status

[root@k8s-master tomcat-java-demo-master]# docker build -t 192.168.30.24/library/java-demo:latest .
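Before pushing, a quick local smoke test is worthwhile (an optional step not in the original flow; port 8080 comes from the Tomcat base image):

[root@k8s-master tomcat-java-demo-master]# docker run -d --name java-demo-test -p 8080:8080 192.168.30.24/library/java-demo:latest
[root@k8s-master tomcat-java-demo-master]# curl -I http://127.0.0.1:8080
[root@k8s-master tomcat-java-demo-master]# docker rm -f java-demo-test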

3. Push to the image registry

[root@k8s-master tomcat-java-demo-master]# docker login 192.168.30.24
Username: admin
Password:
Error response from daemon: Get https://192.168.30.24/v2/: dial tcp 192.168.30.24:443: connect: connection refused

The error here is expected: each Docker host needs to be configured to trust the Harbor registry, and the same setting will be needed later when the nodes pull the image.

[root@k8s-master java-demo]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["192.168.30.24"]
}
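Changes to daemon.json only take effect after the Docker daemon is restarted:

[root@k8s-master java-demo]# systemctl restart docker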

Then log in to the registry again and push:

[root@k8s-master tomcat-java-demo-master]# docker push 192.168.30.24/library/java-demo:latest

4. Manage the Pod with a controller

Write the Deployment. Projects usually go into their own user-defined namespace, and the workload is named after the project so it is easy to remember:

name: tomcat-java-demo
namespace: test

A project usually consists of several components, so label each workload in at least two dimensions: the project name and the app (component) name, for example component 1, component 2, component 3:

project: www
app: java-demo

Next is the image to pull and which registry to pull it from. I suggest keeping the registry project name the same as the one we defined to avoid confusion, so I re-tag the image and push it to our private registry under a matching project:

[root@k8s-master java-demo]# docker tag 192.168.30.24/library/java-demo  192.168.30.24/tomcat-java-demo/java-demo

[root@k8s-master java-demo]# docker push 192.168.30.24/tomcat-java-demo/java-demo:latest

Change the image address in the Deployment accordingly:

      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: tomcat
        image: 192.168.30.24/tomcat-java-demo/java-demo:latest

Now write the YAML manifests.

Create a namespace for the project

[root@k8s-master java-demo]# vim namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test

[root@k8s-master java-demo]# kubectl create -f namespace.yaml
namespace/test created
[root@k8s-master java-demo]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
kube-node-lease   Active   22h
kube-public       Active   22h
kube-system       Active   22h
test              Active   5s

Create a Secret holding the authentication credentials for our Harbor registry. It must be created in the project's namespace:

[root@k8s-master java-demo]# kubectl create secret docker-registry registry-pull-secret --docker-username=admin --docker-password=Harbor12345 --docker-email=111@qq.com --docker-server=192.168.30.24 -n test
secret/registry-pull-secret created
[root@k8s-master java-demo]# kubectl get ns
NAME              STATUS   AGE
default           Active   23h
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h
test              Active   6m39s
[root@k8s-master java-demo]# kubectl get secret
NAME                   TYPE                                  DATA   AGE
default-token-2vtgm    kubernetes.io/service-account-token   3      23h
registry-pull-secret   kubernetes.io/dockerconfigjson        1      46s
  
[root@k8s-master java-demo]# vim deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tomcat-java-demo
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      project: www
      app: java-demo
  template:
    metadata:
      labels:
        project: www
        app: java-demo
    spec:
      imagePullSecrets:
      - name: registry-pull-secret
      containers:
      - name: tomcat
        image: 192.168.30.24/tomcat-java-demo/java-demo:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
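Apply the manifest (the original jumps straight to listing the Pods, so this create step is implied):

[root@k8s-master java-demo]# kubectl create -f deployment.yaml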
  
[root@k8s-master java-demo]# kubectl get pod -n test
NAME                                READY   STATUS    RESTARTS   AGE
tomcat-java-demo-6d798c6996-fjjvk   1/1     Running   0          2m58s
tomcat-java-demo-6d798c6996-lbklf   1/1     Running   0          2m58s
tomcat-java-demo-6d798c6996-strth   1/1     Running   0          2m58s

Next, expose a Service. The selector labels must match the Deployment's labels, otherwise the Service cannot find the corresponding Pods and cannot serve them. Since we will publish the application through Ingress, a ClusterIP Service is enough.

[root@k8s-master java-demo]# vim service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-java-demo
  namespace: test
spec:
  selector:
    project: www
    app: java-demo
  ports:
  - name: web
    port: 80
    targetPort: 8080
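As with the Deployment, create the Service before listing the objects:

[root@k8s-master java-demo]# kubectl create -f service.yaml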
  
[root@k8s-master java-demo]# kubectl get pod,svc -n test
NAME                                    READY   STATUS    RESTARTS   AGE
pod/tomcat-java-demo-6d798c6996-fjjvk   1/1     Running   0          37m
pod/tomcat-java-demo-6d798c6996-lbklf   1/1     Running   0          37m
pod/tomcat-java-demo-6d798c6996-strth   1/1     Running   0          37m

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/tomcat-java-demo   ClusterIP   10.1.175.191   <none>        80/TCP    19s

Test that the project is reachable through the ClusterIP; then it is time to publish it through Ingress.

[root@k8s-master java-demo]# curl 10.1.175.191  
<!DOCTYPE html>  
<html>  
<head lang="en">  
<meta charset="utf-8">  
<meta http-equiv="X-UA-Compatible" content="IE=edge">  
<title>Take beauty home application case</title>  
<meta name="description" content="Take beauty home application case">  
<meta name="keywords" content="index">

Now deploy an ingress-nginx controller; the official manifests are easy to find online. I deploy it as a DaemonSet, so every node runs a controller.

[root@k8s-master java-demo]# kubectl get pod -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-g95pp   1/1     Running   0          3m6s
nginx-ingress-controller-wq6l6   1/1     Running   0          3m6s

Publish the application

Two things matter here: first, the domain name of the website, and second, the namespace of the Service.

[root@k8s-master java-demo]# kubectl get pod,svc -n test
NAME                                    READY   STATUS    RESTARTS   AGE
pod/tomcat-java-demo-6d798c6996-fjjvk   1/1     Running   0          53m
pod/tomcat-java-demo-6d798c6996-lbklf   1/1     Running   0          53m
pod/tomcat-java-demo-6d798c6996-strth   1/1     Running   0          53m

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/tomcat-java-demo   ClusterIP   10.1.175.191   <none>        80/TCP    16m
[root@k8s-master java-demo]# vim ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-java-demo
  namespace: test
spec:
  rules:
  - host: java.maidikebi.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-java-demo
          servicePort: 80

[root@k8s-master java-demo]# kubectl create -f ingress.yaml

Since this is just a test, I bind the domain in my local hosts file: add the domain name and a node IP to the hosts file, and the project can then be accessed by domain name.
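For example (the node IP is a placeholder; use the IP of any node running the ingress controller):

# /etc/hosts on the client machine
<node-ip>   java.maidikebi.com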

Posted by wenquxing on Wed, 10 Jun 2020 19:44:45 -0700