kubectl top node/po shows the error: metrics not available yet
Version information
Kubernetes 1.12
metrics-server 0.3.3
Official repository: https://github.com/kubernetes-incubator/metrics-server/
Pitfalls encountered
Two changes are needed. First, add these flags to the metrics-server container command in the Deployment:

```yaml
spec:
  containers:
  - command:
    - /metrics-server
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-insecure-tls
```

Second, edit the kubelet systemd drop-in:

```shell
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

and add these flags to the kubelet arguments:

```
--authentication-token-webhook=true
--authorization-mode=Webhook
```
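After editing the drop-in, kubelet has to be restarted on each node for the webhook flags to take effect. A minimal sketch, assuming a systemd-managed kubelet as set up by kubeadm:

```shell
# Run on every node after adding the two flags to 10-kubeadm.conf.
# Reload systemd so it picks up the changed drop-in, then restart kubelet.
systemctl daemon-reload
systemctl restart kubelet

# Optional sanity check: confirm the running kubelet has the new flag.
ps -ef | grep '[k]ubelet' | grep -- '--authentication-token-webhook=true'
```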
YAML manifests (my modified versions)
aggregated-metrics-reader.yaml
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```
auth-delegator.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
```
auth-reader.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
```
metrics-apiservice.yaml
```yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
```
metrics-server-deployment.yaml
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
```
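With all the manifests in one directory, deploying and checking on the pod can be sketched as follows (this assumes kubectl is already configured against the cluster; applying from the current directory is illustrative):

```shell
# Apply all manifests in the current directory.
kubectl apply -f .

# Check that the metrics-server pod comes up and becomes Ready.
kubectl -n kube-system get pods -l k8s-app=metrics-server

# Inspect its logs if `kubectl top` still reports "metrics not available yet".
kubectl -n kube-system logs -l k8s-app=metrics-server
```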
metrics-server-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
```
resource-reader.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
```
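Once everything is applied, the following commands can verify that the metrics API is registered and serving. Note that metrics may take a minute or two to appear after metrics-server starts:

```shell
# The APIService should report Available=True once metrics-server is healthy.
kubectl get apiservice v1beta1.metrics.k8s.io

# Query the metrics API directly through the apiserver aggregation layer.
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# Finally, the originally failing commands should now work.
kubectl top node
kubectl top po --all-namespaces
```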
Readme.md
The kubelet configuration change (the webhook flags) must be applied on every node.
See my other posts for related records.
Many variants of the args and command parameters are floating around; modifying them alone may not solve the problem. The key is the API authentication/authorization settings.
I dug through a pile of GitHub issues; some suggestions worked and others didn't.
A proper summary is still to be written!
General conclusion: there are still many rough edges here. Please leave a comment, or add me on WeChat (VX): youremysuperwomen45