Accessing API Server with Service Account in Kubernetes Pod

Keywords: Kubernetes Ubuntu Docker encoding

The Kubernetes API Server is at the core of the whole Kubernetes cluster. We not only need to access the API Server from outside the cluster; sometimes we also need to access it from inside a Pod.

However, in production environments the Kubernetes API Server is "fortified". In the article "Security Configuration of Kubernetes Cluster" I mentioned that Kubernetes authenticates client requests via client cert, static token, basic auth, and so on. For a process running in a Pod, these methods are sometimes appropriate, but often information such as a client cert, static token, or basic-auth password should not be exposed to a process in a Pod. Moreover, requests authenticated by API Server through these methods are fully authorized and can operate arbitrarily on the Kubernetes cluster, which obviously does not meet security requirements. For this reason, Kubernetes recommends the service account scheme instead. This article describes in detail how to access API Server from a Pod through a service account.

0. Test environment

The test environment for this article is a Kubernetes 1.3.7 cluster with two nodes, the master also bearing workloads. The cluster was built with kube-up.sh; for the specific steps, see "An article shows you how to install Kubernetes".

1. What is a service account?

What is a service account? As the name implies, a service account is the account used by a process in a Pod to access the Kubernetes API; it provides an identity for the process, in contrast to a user account (for example, a user account is used when kubectl accesses API Server). Compared with the global permissions of a user account, a service account is better suited to lightweight tasks and focuses on authorizing the processes in certain specific Pods.

A service account exists as a resource in the Kubernetes cluster. We can list the service accounts in the current cluster through kubectl:

# kubectl get serviceaccount --all-namespaces
NAMESPACE                    NAME           SECRETS   AGE
default                      default        1         140d
kube-system                  default        1         140d

Let's take a look at the details of the service account named "default" under the kube-system namespace:

# kubectl describe serviceaccount/default -n kube-system
Name:        default
Namespace:    kube-system
Labels:        <none>

Image pull secrets:    <none>

Mountable secrets:     default-token-hpni0

Tokens:                default-token-hpni0

We see that a service account is not complicated: it is simply associated with a secret resource that serves as its token, also called a service-account-token. It is this token that actually does the work in API Server authentication:

# kubectl get secret  -n kube-system
NAME                  TYPE                                  DATA      AGE
default-token-hpni0   kubernetes.io/service-account-token   3         140d

# kubectl get secret default-token-hpni0 -o yaml -n kube-system
apiVersion: v1
data:
  ca.crt: {base64 encoding of ca.crt data}
  namespace: a3ViZS1zeXN0ZW0=
  token: {base64 encoding of bearer token}

kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: default
    kubernetes.io/service-account.uid: 90ded7ff-9120-11e6-a0a6-00163e1625a9
  creationTimestamp: 2016-10-13T08:39:33Z
  name: default-token-hpni0
  namespace: kube-system
  resourceVersion: "2864"
  selfLink: /api/v1/namespaces/kube-system/secrets/default-token-hpni0
  uid: 90e71909-9120-11e6-a0a6-00163e1625a9
type: kubernetes.io/service-account-token

We see that a secret of this type, service-account-token, contains three pieces of data: ca.crt, namespace, and token.

  • ca.crt
    This is API Server's CA public key certificate, used by the process in a Pod to verify API Server's server-side TLS certificate;

  • namespace
    This is the base64 encoding of the secret's namespace: echo -n "kube-system" | base64 => "a3ViZS1zeXN0ZW0="

  • token
    This is the base64 encoding of a bearer token signed with API Server's private key. It is used in the API Server authentication step.

2. Service account authentication of API Server

As mentioned earlier, a service account provides an identity for the process in a Pod. In Kubernetes' authentication step, the username a service account provides for a Pod has the form:

system:serviceaccount:(NAMESPACE):(SERVICEACCOUNT)

Taking the "default" service account under the kube-system namespace above as an example, the username of its Pod is:

system:serviceaccount:kube-system:default
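This naming rule is simple enough to express directly; a trivial Go helper (the function name is mine, not from any Kubernetes library):

```go
package main

import "fmt"

// saUsername builds the username API Server derives for a service account,
// following the system:serviceaccount:(NAMESPACE):(SERVICEACCOUNT) rule.
func saUsername(namespace, name string) string {
	return "system:serviceaccount:" + namespace + ":" + name
}

func main() {
	fmt.Println(saUsername("kube-system", "default"))
	// prints "system:serviceaccount:kube-system:default"
}
```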

With a username, what about the credential? That is the token in the service-account-token mentioned above. As noted in "Security Configuration of Kubernetes Cluster", API Server's authentication step supports multiple methods: client cert, bearer token, static password auth, and so on. API Server tries them one by one, and if any one succeeds, authentication passes. Once API Server finds that a client request carries a service account token, it authenticates the request using that signed bearer token. The token is signed by API Server when the service account is created, using the key given by the API Server startup parameter --service-account-key-file. If --service-account-key-file is not set, the value of --tls-private-key-file, i.e. API Server's own private key, is used by default.

After authentication succeeds, API Server puts the Pod's username, together with its groups system:serviceaccounts and system:serviceaccounts:(NAMESPACE), through the authorization and admission control steps. In these two steps, cluster administrators can refine the service account's privileges.

3. Default service account

Kubernetes automatically creates a service account resource named "default" in each namespace of the cluster:

# kubectl get serviceaccount --all-namespaces
NAMESPACE                    NAME           SECRETS   AGE
default                      default        1         140d
kube-system                  default        1         140d

If the spec.serviceAccount field is not explicitly specified in a Pod, Kubernetes automatically mounts the default service account of the namespace into the Pods created in that namespace. Taking the namespace "default" as an example, let's look at one of its Pods:

# kubectl describe pod/index-api-2822468404-4oofr
Name:        index-api-2822468404-4oofr
Namespace:    default
... ...

Containers:
  index-api:
   ... ...
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-40z0x (ro)
    Environment Variables:    <none>
... ...
Volumes:
... ...
  default-token-40z0x:
    Type:    Secret (a volume populated by a Secret)
    SecretName:    default-token-40z0x

QoS Class:    BestEffort
Tolerations:    <none>
No events.

As you can see, Kubernetes mounts the service account token of the service account "default" in the default namespace into the path /var/run/secrets/kubernetes.io/serviceaccount of the container in the Pod.

Entering the container, we can see the structure under the mounted service account path:

# docker exec 3d11ee06e0f8 ls  /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token

These three files correspond to the data in the service account's token secret mentioned above.

4. Default service account doesn't work

As mentioned above, each Pod automatically gets the default service account of its namespace mounted, and a process in that Pod uses it when accessing API Server. How does a process in a Pod use this service account? Kubernetes officially provides the client-go project, which shows how to access API Server using a service account. Here we test whether we can successfully access API Server based on examples/in-cluster/main.go in the client-go project.

First download the client-go source code:

# go get k8s.io/client-go

# ls -F
CHANGELOG.md  dynamic/   Godeps/     INSTALL.md   LICENSE   OWNERS  plugin/    rest/     third_party/  transport/  vendor/
discovery/    examples/  informers/  kubernetes/  listers/  pkg/    README.md  testing/  tools/        util/

Let's modify examples/in-cluster/main.go. Since a panic makes it inconvenient to view the Pod logs, we change the panics to print to standard output without returning, so that the Pod periodically outputs its logs even on failure:

// k8s.io/client-go/examples/in-cluster/main.go
... ...
func main() {
    // creates the in-cluster config
    config, err := rest.InClusterConfig()
    if err != nil {
        fmt.Println(err)
    }
    // creates the clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        fmt.Println(err)
    }
    for {
        pods, err := clientset.CoreV1().Pods("").List(metav1.ListOptions{})
        if err != nil {
            fmt.Println(err)
        } else {
            fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
        }
        time.Sleep(10 * time.Second)
    }
}

Build main.go with go build, and based on the resulting binary create a simple Dockerfile:

FROM ubuntu:14.04
MAINTAINER Tony Bai <bigwhite.cn@gmail.com>

COPY main /root/main
RUN chmod +x /root/main
WORKDIR /root
ENTRYPOINT ["/root/main"]

Build a docker image for testing:

# docker build -t k8s/example1:latest .
... ...

# docker images|grep k8s
k8s/example1                                                  latest              ceb3efdb2f91        14 hours ago        264.4 MB

Create a deployment manifest:

//main.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-example1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: k8s-example1
    spec:
      containers:
      - name: k8s-example1
        image: k8s/example1:latest
        imagePullPolicy: IfNotPresent

Let's create the deployment (kubectl create -f main.yaml -n kube-system) and see whether the main program in the Pod can successfully access API Server:

# kubectl logs k8s-example1-1569038391-jfxhx
the server has asked for the client to provide credentials (get pods)
the server has asked for the client to provide credentials (get pods)

API Server log(/var/log/upstart/kube-apiserver.log):

E0302 15:45:40.944496   12902 handlers.go:54] Unable to authenticate the request due to an error: crypto/rsa: verification error
E0302 15:45:50.946598   12902 handlers.go:54] Unable to authenticate the request due to an error: crypto/rsa: verification error
E0302 15:46:00.948398   12902 handlers.go:54] Unable to authenticate the request due to an error: crypto/rsa: verification error

Something is wrong! The "default" service account under the kube-system namespace does not seem to work. (Note: this is in the Kubernetes 1.3.7 environment.)

5. Create a new service account

On Kubernetes' GitHub issues there are many reports of the "default" service account not working. The suggested workaround seems to be creating a new service account.

Creating a service account is very simple. We create a serviceaccount.yaml:

//serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-example1

Create the service account:

# kubectl create -f serviceaccount.yaml
serviceaccount "k8s-example1" created

# kubectl get serviceaccount
NAME           SECRETS   AGE
default        1         139d
k8s-example1   1         12s

Modify main.yaml to have the Pod explicitly use the new service account:

//main.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-example1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: k8s-example1
    spec:
      serviceAccount: k8s-example1
      containers:
      - name: k8s-example1
        image: k8s/example1:latest
        imagePullPolicy: IfNotPresent

Okay, let's recreate the deployment and check the Pod log:

# kubectl logs k8s-example1-456041623-rqj87
There are 14 pods in the cluster
There are 14 pods in the cluster
... ...

We see that the main program successfully passes the API Server authentication link using the new service account and gets the cluster information.

6. Epilogue

In another environment of mine, a Kubernetes 1.5.1 cluster installed with kubeadm, I repeated the simple test above, this time using the default service account directly. Under Kubernetes 1.5.1 the Pod runs successfully; that is, through the default service account, our client-go in-cluster example program passes API Server's authentication and retrieves the Pods' meta-information.
