
The Kubernetes Dashboard is a web-based user interface that allows users to easily interact with a Kubernetes cluster. It lets users manage, monitor, and troubleshoot applications as well as the cluster itself. We already looked at how to deploy the dashboard in this tutorial. In this guide, we are going to explore integrating the Kubernetes Dashboard with Active Directory to ease user and password management.

Kubernetes supports two categories of users:

  • Service Accounts: the default method supported by Kubernetes. Users access the dashboard with service account tokens.
  • Normal Users: any other authentication method configured in the cluster.

For this, we will use a project called Dex. Dex is an OpenID Connect (OIDC) provider originally developed by CoreOS. It acts as a bridge between Kubernetes and Active Directory: it authenticates users against LDAP and issues the OIDC tokens that Kubernetes understands.

Kubernetes OpenID Connect flow: how the Kubernetes Dashboard authenticates against Active Directory using OIDC

Setup Requirements:

  • You will need an IP on your network for the Active Directory server. In my case, this IP will be 172.16.16.16.
  • You will also need a working Kubernetes cluster whose nodes can communicate with the Active Directory IP. Take a look at how to create a Kubernetes cluster using kubeadm or RKE if you don’t have one yet.
  • You will also need a domain name that supports wildcard DNS entries. I will use the wildcard DNS record “*.kubernetes.mydomain.com” to route external traffic to my Kubernetes cluster.

Step 1: Deploy Dex on Kubernetes Cluster

We will first create a namespace and a service account for Dex. Then we will configure RBAC rules for the Dex service account before deploying it. This ensures the application has the proper permissions.

1. Create a dex-namespace.yaml file.

$ vim dex-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: auth-system

2. Create the namespace for Dex.

$ kubectl apply -f dex-namespace.yaml

3. Create a dex-rbac.yaml file.

$ vim dex-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dex
  namespace: auth-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dex
rules:
- apiGroups: ["dex.coreos.com"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dex
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dex
subjects:
- kind: ServiceAccount
  name: dex
  namespace: auth-system

4. Create the permissions for Dex.

$ kubectl apply -f dex-rbac.yaml

5. Create a dex-configmap.yaml file. Make sure you modify the issuer URL, the redirect URIs, the client secret and the Active Directory configuration accordingly.

$ vim dex-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dex
  namespace: auth-system
data:
  config.yaml: |
    issuer: https://auth.kubernetes.mydomain.com/
    web:
      http: 0.0.0.0:5556
    frontend:
      theme: custom
    telemetry:
      http: 0.0.0.0:5558
    staticClients:
    - id: oidc-auth-client
      redirectURIs:
      - https://kubectl.kubernetes.mydomain.com/callback
      - https://dashboard.kubernetes.mydomain.com/oauth2/callback
      name: oidc-auth-client
      secret: secret
    connectors:
    - type: ldap
      id: ldap
      name: LDAP
      config:
        host: 172.16.16.16:389
        insecureNoSSL: true
        insecureSkipVerify: true
        bindDN: ldapadmin
        bindPW: 'KJZOBwS9DtB'
        userSearch:
          baseDN: OU=computingforgeeks departments,DC=computingforgeeks,DC=net
          username: sAMAccountName
          idAttr: sn
          nameAttr: givenName
          emailAttr: mail
        groupSearch: 
          baseDN: CN=groups,OU=computingforgeeks,DC=computingforgeeks,DC=net 
          userMatchers:
          - userAttr: sAMAccountName
            groupAttr: memberOf
          nameAttr: givenName                   
    oauth2:
      skipApprovalScreen: true
    storage:
      type: kubernetes
      config:
        inCluster: true

6. Configure Dex.

$ kubectl apply -f dex-configmap.yaml
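Before moving on, it can help to sanity-check the LDAP parameters from any host that can reach the Active Directory server. This is a sketch using the standard ldapsearch client (from the openldap-clients package); the bind DN, password, base DN, and the sample account name are the example values from the ConfigMap above and should be replaced with yours:

```shell
# Simple bind (-x) against the AD server, searching the user base DN
# for one account by sAMAccountName and returning the attributes Dex uses.
ldapsearch -x -H ldap://172.16.16.16:389 \
  -D "ldapadmin" -w 'KJZOBwS9DtB' \
  -b "OU=computingforgeeks departments,DC=computingforgeeks,DC=net" \
  "(sAMAccountName=mkemei)" mail givenName memberOf
```

If the bind credentials or base DN are wrong here, Dex will fail with similar LDAP errors at login time.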

7. Create the dex-deployment.yaml file.

$ vim dex-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
  namespace: auth-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dex
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: dex
        revision: "1"
    spec:
      containers:
      - command:
        - /usr/local/bin/dex
        - serve
        - /etc/dex/cfg/config.yaml
        image: quay.io/dexidp/dex:v2.17.0
        imagePullPolicy: IfNotPresent
        name: dex
        ports:
        - containerPort: 5556
          name: http
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/dex/cfg
          name: config
        - mountPath: /web/themes/custom/
          name: theme          
      dnsPolicy: ClusterFirst
      serviceAccountName: dex
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: config.yaml
            path: config.yaml
          name: dex
        name: config
      - name: theme
        emptyDir: {}     

8. Deploy Dex.

$ kubectl apply -f dex-deployment.yaml
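You can confirm the Dex pod started correctly before continuing:

```shell
# List the Dex pod and tail its logs for LDAP or configuration errors
kubectl -n auth-system get pods -l app=dex
kubectl -n auth-system logs -l app=dex --tail=20
```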

9. Create a dex-service.yaml file.

$ vim dex-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: dex
  namespace: auth-system
spec:
  selector:
    app: dex
  ports:
  - name: dex
    port: 5556
    protocol: TCP
    targetPort: 5556

10. Create a service for the Dex deployment.

$ kubectl apply -f dex-service.yaml

11. Create a dex-ingress secret. Make sure the certificate data for the cluster is at the specified location, or change the path to point to it. If you have a certificate manager installed in your cluster, you can skip this step.

$ kubectl create secret tls dex --key /data/Certs/kubernetes.mydomain.com.key --cert /data/Certs/kubernetes.mydomain.com.crt -n auth-system

12. Create a dex-ingress.yaml file. Change the host parameters and your certificate issuer name accordingly.

$ vim dex-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dex
  namespace: auth-system
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: dex
    hosts:
    - auth.kubernetes.mydomain.com
  rules:
  - host: auth.kubernetes.mydomain.com
    http:
      paths:
      - backend:
          serviceName: dex
          servicePort: 5556

13. Create the ingress for the Dex service.

$ kubectl apply -f dex-ingress.yaml

Wait a couple of minutes until the certificate manager generates a certificate for Dex. You can check whether Dex is deployed properly by browsing to: https://auth.kubernetes.mydomain.com/.well-known/openid-configuration
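Instead of a browser, you can also query the discovery endpoint with curl. A healthy Dex instance returns a JSON document whose issuer field matches the issuer configured in the ConfigMap:

```shell
# -k skips TLS verification, which is useful while the certificate
# is still being issued; drop it once the certificate is in place.
curl -ks https://auth.kubernetes.mydomain.com/.well-known/openid-configuration | python3 -m json.tool
```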

Step 2: Configure the Kubernetes API to access Dex as OpenID connect provider

Next, we will look at how to configure the API server for both an RKE and a kubeadm cluster. To enable the OIDC plugin, we need to configure several flags on the API server as shown below:

A. RKE CLUSTER

1. SSH to your rke node.

$ ssh <user>@<rke-node-ip>

2. Edit the Kubernetes API configuration. Add the OIDC parameters and modify the issuer URL accordingly.

$ sudo vim ~/Rancher/cluster.yml
    kube-api:
      service_cluster_ip_range: 10.43.0.0/16
      # Expose a different port range for NodePort services
      service_node_port_range: 30000-32767
      extra_args:
        # Enable audit log to stdout
        audit-log-path: "-"
        # Increase number of delete workers
        delete-collection-workers: 3
        # Set the level of log output to debug-level
        v: 4
#ADD THE FOLLOWING LINES 
        oidc-issuer-url: https://auth.kubernetes.mydomain.com/    
        oidc-client-id: oidc-auth-client
        oidc-ca-file: /data/Certs/kubernetes.mydomain.com.crt
        oidc-username-claim: email
        oidc-groups-claim: groups
      extra_binds:
        - /data/Certs:/data/Certs ##ENSURE THE WILDCARD CERTIFICATES ARE PRESENT IN THIS FILE PATH IN ALL MASTER NODES

3. The Kubernetes API server will restart by itself once you run rke up.

$ rke up

B. KUBEADM CLUSTER

1. SSH to your node.

$ ssh <user>@<master-node-ip>

2. Edit the Kubernetes API configuration. Add the OIDC parameters and modify the issuer URL accordingly.

$ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
    command:
    - /hyperkube
    - apiserver
    - --advertise-address=10.10.40.30 
#ADD THE FOLLOWING LINES:
... 
    - --oidc-issuer-url=https://auth.kubernetes.mydomain.com/
    - --oidc-client-id=oidc-auth-client
##ENSURE THE WILDCARD CERTIFICATES ARE PRESENT IN THIS FILE PATH IN ALL MASTER NODES: 
    - --oidc-ca-file=/etc/ssl/kubernetes/kubernetes.mydomain.com.crt    
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
...

3. The Kubernetes API will restart by itself.
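Once the API server accepts OIDC tokens, you can also authenticate from the command line. The following is a sketch using kubectl's built-in oidc auth provider; the token values are placeholders you would obtain from Dex (for example via the kubectl.kubernetes.mydomain.com redirect URI configured earlier), and the context and cluster names are examples:

```shell
# Add an OIDC user entry to your kubeconfig (token values are placeholders)
kubectl config set-credentials mkemei \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://auth.kubernetes.mydomain.com/ \
  --auth-provider-arg=client-id=oidc-auth-client \
  --auth-provider-arg=client-secret=secret \
  --auth-provider-arg=id-token=<ID_TOKEN> \
  --auth-provider-arg=refresh-token=<REFRESH_TOKEN>

# Point a context at this user and switch to it
kubectl config set-context oidc@mycluster --cluster=mycluster --user=mkemei
kubectl config use-context oidc@mycluster
```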

Step 3: Deploy the Oauth2 Proxy and Configure the Kubernetes Dashboard Ingress

1. Generate a secret for the Oauth2 proxy.

$ python3 -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(16)).decode())'

2. Copy the generated secret and use it for the OAUTH2_PROXY_COOKIE_SECRET value in the next step.

3. Create an oauth2-proxy-deployment.yaml file. Modify the OIDC client secret, the OIDC issuer URL, and the Oauth2 proxy cookie secret accordingly.

$ vim oauth2-proxy-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: auth-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --cookie-secure=false
        - --provider=oidc
        - --client-id=oidc-auth-client
        - --client-secret=***********
        - --oidc-issuer-url=https://auth.kubernetes.mydomain.com/
        - --http-address=0.0.0.0:8080
        - --upstream=file:///dev/null
        - --email-domain=*
        - --set-authorization-header=true
        env:
        # docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));'
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: ***********
        image: sguyennet/oauth2-proxy:header-2.2
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 8080
          protocol: TCP

4. Deploy the Oauth2 proxy.

$ kubectl apply -f oauth2-proxy-deployment.yaml
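The --set-authorization-header=true flag makes the proxy forward the user's ID token to the dashboard as a Bearer token. An ID token is a signed JWT whose payload segment is plain base64url-encoded JSON, so during troubleshooting you can decode it and inspect the email and groups claims that the API server was configured to read in Step 2. A minimal Python sketch, using a made-up, unsigned payload for illustration:

```python
import base64
import json

# A hypothetical ID-token payload, similar in shape to what Dex issues.
# Real tokens are signed JWTs; we only illustrate the claims the API server
# reads via --oidc-username-claim=email and --oidc-groups-claim=groups.
claims = {
    "iss": "https://auth.kubernetes.mydomain.com/",
    "aud": "oidc-auth-client",
    "email": "mkemei@computingforgeeks.net",
    "groups": ["kubernetes-admins"],
}

def b64url(data: bytes) -> str:
    # JWT segments use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

payload_segment = b64url(json.dumps(claims).encode())

# Decoding the segment (after re-adding padding) recovers the claims;
# the same trick works on the middle segment of a real token.
padded = payload_segment + "=" * (-len(payload_segment) % 4)
decoded = json.loads(base64.urlsafe_b64decode(padded))
print(decoded["email"], decoded["groups"])
```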

5. Create an oauth2-proxy-service.yaml file.

$ vim oauth2-proxy-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: auth-system
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    k8s-app: oauth2-proxy

6. Create a service for the Oauth2 proxy deployment.

$ kubectl apply -f oauth2-proxy-service.yaml

7. Create a dashboard-ingress.yaml file. Modify the dashboard URLs and the host parameter accordingly.

$ vim dashboard-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
          nginx.ingress.kubernetes.io/auth-url: "https://dashboard.kubernetes.mydomain.com/oauth2/auth"
          nginx.ingress.kubernetes.io/auth-signin: "https://dashboard.kubernetes.mydomain.com/oauth2/start?rd=https://$host$request_uri$is_args$args"
          nginx.ingress.kubernetes.io/secure-backends: "true"
          nginx.ingress.kubernetes.io/configuration-snippet: |
            auth_request_set $token $upstream_http_authorization;
            proxy_set_header Authorization $token;
spec:
  rules:
  - host: dashboard.kubernetes.mydomain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /

8. Create the ingress for the dashboard service.

$ kubectl apply -f dashboard-ingress.yaml

9. Create a kubernetes-dashboard-external-tls ingress secret. Make sure the certificate data for the cluster is at the specified location, or change the path to point to it. Skip this step if you are using a certificate manager.

$ kubectl create secret tls kubernetes-dashboard-external-tls --key /data/Certs/kubernetes.mydomain.com.key --cert /data/Certs/kubernetes.mydomain.com.crt -n auth-system

10. Create an oauth2-proxy-ingress.yaml file. Modify the certificate manager issuer and the host parameters accordingly.

$ vim oauth2-proxy-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
  name: oauth-proxy
  namespace: auth-system
spec:
  rules:
  - host: dashboard.kubernetes.mydomain.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 8080
        path: /oauth2
  tls:
  - hosts:
    - dashboard.kubernetes.mydomain.com
    secretName: kubernetes-dashboard-external-tls

11. Create the ingress for the Oauth2 proxy service.

$ kubectl apply -f oauth2-proxy-ingress.yaml

12. Create the role binding.

$ kubectl create rolebinding <username>-rolebinding-<namespace> --clusterrole=admin --user=<username> -n <namespace>
e.g.

$ kubectl create rolebinding mkemei-rolebinding-default --clusterrole=admin --user=mkemei@computingforgeeks.net -n default

Note that usernames are case-sensitive, so confirm the correct format before applying the role binding.
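After creating the role binding, you can verify the permissions with kubectl's built-in impersonation instead of logging in through the dashboard (the email below is the example user from above):

```shell
# Check what the AD user may do in the default namespace
kubectl auth can-i list pods --as=mkemei@computingforgeeks.net -n default
kubectl auth can-i create deployments --as=mkemei@computingforgeeks.net -n default
```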

13. Wait a couple of minutes and browse to https://dashboard.kubernetes.mydomain.com.

14. Login with your Active Directory user.

As you can see below, the user mkemei@computingforgeeks.net should be able to see and modify the default namespace.

Check more articles on Kubernetes:

Monitor Kubernetes Deployments with Kubernetes Operational View

How To Ship Kubernetes Logs to External Elasticsearch

How To Perform Git clone in Kubernetes Pod deployment
