Ali lxcfs DaemonSet Mode
At first, privileged mode was enabled on the apiserver and the kubelet nodes as the relevant documentation describes (--allow-privileged=true), but when the Ali document was followed the DaemonSet would not run at all. The related GitHub issues ask exactly why it does not work, but the responses are unclear; they do mention the need to ...
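For context, on Kubernetes releases of that era (pre-1.15) privileged containers were gated by the --allow-privileged=true flag on both the apiserver and the kubelet. A minimal sketch of enabling it, assuming an RPM-based kubeadm install where the kubelet reads /etc/sysconfig/kubelet (paths and unit layout may differ on your nodes):

# kube-apiserver: add --allow-privileged=true to its flags (static pod manifest or systemd unit)
# kubelet: pass the same flag via its extra args, then restart it
echo 'KUBELET_EXTRA_ARGS=--allow-privileged=true' >> /etc/sysconfig/kubelet
systemctl daemon-reload && systemctl restart kubelet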
Posted by wilzy1 on Sat, 11 May 2019 01:22:20 -0700
K8S 1.14 High Availability Production Cluster Deployment Scheme
System description
System Component Version
Operating System: CentOS 7.6
Kernel: 4.4
Kubernetes: v1.14.1
Docker: 18.09 (versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09 are supported)
Etcd: v3.3.12
Flannel: v0.11
cni-plugins: v0.7.5
CoreDNS: 1.4.0
Schematic diagram
Architecture description:
Six hosts are used: three Master nodes and three worker nodes
The Kube ...
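A quick way to confirm that each host actually runs the component versions listed above, a hedged sketch, adjusting for where the binaries live on your hosts:

# Check installed component versions against the list above
kubectl version --short
kubelet --version
docker --version
etcd --version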
Posted by aaronlzw_21 on Sat, 11 May 2019 01:18:23 -0700
Deploying a High Availability Rancher Cluster with Helm
Background
Rancher HA can be deployed in several ways:
Helm HA installation deploys Rancher into an existing Kubernetes cluster. Rancher uses the cluster's etcd to store its data and relies on Kubernetes scheduling for high availability (a sketch follows below).
RKE HA installation uses the RKE tool to install a separate Kubernetes cluster dedicated to the Rancher HA depl ...
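As an illustration of the Helm HA path, the usual flow looked roughly like this in the Helm 2 era; the chart repository URL is Rancher's public one, while the hostname is a placeholder and TLS/cert-manager options are omitted:

# Add the Rancher chart repository and install into the existing cluster
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
# Helm 2 syntax; rancher.example.com is a placeholder hostname
helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com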
Posted by scrypted on Fri, 10 May 2019 12:44:24 -0700
Kubernetes 1.13.1 + etcd 3.3.10 + flanneld 0.10 cluster deployment
New features of Kubernetes 1.13
Using kubeadm (GA) to simplify Kubernetes cluster management
Most engineers working with Kubernetes should be able to use kubeadm. It is an important tool for managing a cluster's life cycle, from creation through configuration to upgrade; kubeadm has now officially reached GA. Kubeadm handles the guidan ...
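As a minimal sketch of the kubeadm flow discussed above, using the versions from this article; the pod CIDR assumes flannel's default network, and the flannel manifest URL may change over time:

# Initialize the control plane (version and CIDR match this article's setup)
kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16
# kubeadm prints a 'kubeadm join ...' command; run it on each worker node
# Install the flannel network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml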
Posted by phpanon on Tue, 07 May 2019 07:20:38 -0700
k8s master load balancing
1. Server Planning
Note: Only master load balancing is implemented

Server name    IP              Role
k8s-master1    192.168.1.107   k8s-master1, etcd
k8s-master2    192.168.1.108   k8s-master2
k8s-node1      192.168.1.109   k8s-node1
nginx          192.168.1.55    nginx load balancer
2. k8s-master1 deployment
1. Install Docker
# Close the firewall
ufw disab ...
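Given the server plan above, the nginx host (192.168.1.55) would typically balance TCP traffic for the apiservers on the two masters. A minimal sketch using an nginx stream block; the apiserver port 6443 and the append-to-nginx.conf approach are assumptions, and the stream module must be available:

# On the nginx host: append a TCP load-balancing block for the two masters
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    upstream kube_apiserver {
        server 192.168.1.107:6443;
        server 192.168.1.108:6443;
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}
EOF
nginx -t && systemctl reload nginx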
Posted by Jr0x on Sat, 04 May 2019 23:50:39 -0700
Various Ways to Expose Internal Services in K8s
hostNetwork: true
Test YAML:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnetwork
spec:
  hostNetwork: true
  containers:
  - name: nginx-hostnetwork
    image: nginx:1.7.9
# Create the pod and test it
$ kubectl create -f nginx-hostnetwork.yaml
$ kubectl get pod --all-namespaces -o=wide | grep nginx-hostnetwork
default ...
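Because hostNetwork: true places the pod in the node's network namespace, nginx should answer on the node's own IP and port 80. A quick check, where <node-ip> is a placeholder for the node shown in the -o wide output above:

# Replace <node-ip> with the IP of the node the pod was scheduled on
curl -I http://<node-ip>:80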
Posted by phpr0ck5 on Wed, 24 Apr 2019 17:39:34 -0700
Source Code Analysis of Kubernetes Replication Controller
Kubernetes v1.2 introduced the Deployments feature: a Deployment manages Pods by creating ReplicaSets, which are considered the next generation of the Replication Controller. In fact, the only difference between ReplicaSet and Replication Controller is the type of Selector they support.
ReplicaSet supports ...
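To make the Selector difference concrete, here is a minimal, illustrative ReplicaSet that uses a set-based matchExpressions selector, something a Replication Controller's plain equality selector cannot express; the names, labels, and image are examples only:

# Illustrative ReplicaSet with a set-based selector
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
EOF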
Posted by bogdani on Sat, 20 Apr 2019 12:00:35 -0700
Kubernetes 1.5 Source Analysis (II): Resource Registration in apiServer
Source version
Kubernetes v1.5.0
Brief introduction
There are various resources in k8s, such as Pod, Service, RC, namespaces, and so on; what the user operates on is, in effect, a large number of resources. These resources are not unorganized: they are arranged by Group and Version. Each resource belongs to a group, and there are versions of resource ...
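The Group/Version organization described above can be seen directly with kubectl; each entry that the first command prints is a group/version pair under which resources are registered (output varies by cluster, and kubectl api-resources requires kubectl 1.11 or newer):

# List the registered API group/version pairs
kubectl api-versions
# Show each resource together with the API group it belongs to
kubectl api-resources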
Posted by sqlmc on Fri, 19 Apr 2019 19:30:34 -0700
Installing a cluster master node with kubeadm
Install the master node using kubeadm
1. Basic configuration preparation
# Disable swap (required by the kubelet)
swapoff -a
# Comment out the swap entry in /etc/fstab so it stays disabled after reboot
sed -ri "/swap/s@(.*)@#&@g" /etc/fstab
# Let iptables see bridged traffic and enable IP forwarding
echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1" >> ...
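The truncated echo above presumably appends those three settings to a sysctl configuration file; once written, they would typically be loaded with:

# Reload kernel parameters from /etc/sysctl.conf and /etc/sysctl.d/*
sysctl --system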
Posted by Octave91 on Fri, 19 Apr 2019 16:12:36 -0700
Exposing dubbo Services in Kubernetes with a Service
In some scenarios, a dubbo consumer needs to access dubbo providers deployed in k8s during development. Especially in a self-built Kubernetes cluster, TCP ports are difficult to proxy, which makes it hard to reach services inside the cluster from the development environment. A Service can be used here to expose the dubbo service ...
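As a sketch of the Service approach described above, a NodePort Service is one simple option; the pod label app: dubbo-provider, dubbo's default port 20880, and the node port 30880 are all assumptions to adapt:

# Expose the dubbo provider's TCP port outside the cluster via a NodePort Service
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dubbo-provider
spec:
  type: NodePort
  selector:
    app: dubbo-provider   # assumed pod label
  ports:
  - port: 20880           # dubbo's default protocol port
    targetPort: 20880
    nodePort: 30880       # example port in the default 30000-32767 NodePort range
EOF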
Posted by Jakebert on Wed, 20 Mar 2019 19:03:27 -0700