k8s dashboard pod deployment
- Write the yaml file
- Create the pod
- Open the web UI in a browser
Write the yaml file
# cat dashboard.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: huanwei/kubernetes-dashboard-amd64:latest
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.6.150:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
Here --apiserver-host=http://192.168.6.150:8080 must point to the master's IP; you cannot use a hostname (if you have to, add the hosts entry to every node, because you don't know which node the pod will be scheduled on).
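To sanity-check that address, you can query the apiserver's insecure port from any node (/version is a standard apiserver endpoint, and port 8080 is the insecure port configured later in /etc/kubernetes/apiserver):

# curl http://192.168.6.150:8080/version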
Create the pod
# kubectl create -f dashboard.yaml
deployment "kubernetes-dashboard-latest" created
service "kubernetes-dashboard" created
As shown above, both objects were created successfully.
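If you want to double-check, the standard kubectl listing commands work here:

# kubectl get deployments --namespace=kube-system
# kubectl get services --namespace=kube-system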
Open the web UI in a browser
But at which IP and port is the web UI? First check which node the pod was deployed on:
# kubectl get pods --namespace=kube-system
No resources found.
Unfortunately, the pod creation failed.
How do I check the kubernetes logs?
If the kubernetes components are started with logtostderr=true, their output is taken over by systemd, so it can be viewed with journalctl.
On a Linux system where systemd manages the kubernetes services, the journal captures each service's log output; you can view it with systemctl status <service> or journalctl -u <service> -f.
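For example (assuming the component services use the CentOS package unit names such as kube-apiserver and kubelet):

# systemctl status kube-apiserver -l
# journalctl -u kubelet -f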
The kubernetes components and the log content each is relevant to:

k8s component | Relevant log content
---|---
kube-apiserver |
kube-controller-manager | Pod scaling / RC related
kube-scheduler | Pod scaling / RC related
kubelet | Pod lifecycle: create, stop, etc.
etcd |
Adapted from this blog: http://blog.csdn.net/huwh_/article/details/71308301
# journalctl -u kube-controller-manager | tail
FailedCreate: Error creating: No API token found for service account "default", retry after the token is automatically created and added to the service account
The error message above tells us the failure is caused by service account authentication.
There are two ways to solve this: skip authentication, or add authentication.
Reference blog http://blog.csdn.net/jinzhencs/article/details/51435020
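For completeness, a minimal sketch of the "add authentication" route (the flag names exist on kube-apiserver and kube-controller-manager; the key path and the CentOS-style config file locations are assumptions):

# openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048
Then add --service-account-key-file=/etc/kubernetes/serviceaccount.key to KUBE_API_ARGS in /etc/kubernetes/apiserver, add --service-account-private-key-file=/etc/kubernetes/serviceaccount.key to KUBE_CONTROLLER_MANAGER_ARGS in /etc/kubernetes/controller-manager, and restart both services.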
This time we skip authentication to solve the problem. Modify /etc/kubernetes/apiserver:
# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
Remove ServiceAccount from the KUBE_ADMISSION_CONTROL line above; I mentioned this in my blog post on building the k8s cluster.
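After the change, the line is simply the one above with ServiceAccount removed:

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"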
Restart the kubernetes services on the master and run the second step (create the pod) again.
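A sketch of that, assuming the CentOS-packaged systemd unit name and recreating the objects from the same yaml:

# systemctl restart kube-apiserver
# kubectl delete -f dashboard.yaml
# kubectl create -f dashboard.yaml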
View the service details to find the NodePort
# kubectl describe service/kubernetes-dashboard --namespace="kube-system"
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP: 10.254.235.156
Port: <unset> 80/TCP
NodePort: <unset> 31081/TCP
Endpoints: 172.17.26.2:9090
Session Affinity: None
No events.
Run docker ps on the nodes to see which node the dashboard container is running on:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62630e335fc1 huanwei/kubernetes-dashboard-amd64:latest "/dashboard --port=90" About a minute ago Up About a minute k8s_kubernetes-dashboard.44479d71_kubernetes-dashboard-latest-2748740746-dj9m0_kube-system_a0cfa399-b218-11e7-a8b9-080027cd4201_90a07124
90f1a6ddaa03 registry.access.redhat.com/rhel7/pod-infrastructure:latest "/usr/bin/pod" About a minute ago Up About a minute k8s_POD.28c50bab_kubernetes-dashboard-latest-2748740746-dj9m0_kube-system_a0cfa399-b218-11e7-a8b9-080027cd4201_bd775cdb
Therefore, the access address is:
http://node2:31081/#/workload?namespace=default
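Instead of running docker ps on each node, you can also ask kubectl directly which node the pod landed on (-o wide adds the node column):

# kubectl get pods -o wide --namespace=kube-system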
The IP and port can change after every redeploy or reboot, so how can you access the dashboard at a fixed address?
I cover that in my other blog post, Exposing services using Ingress.
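In the meantime, one partial workaround is to pin the service's NodePort so at least the port stays fixed (a sketch; 30081 is an arbitrary value within the default 30000-32767 NodePort range):

  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30081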