Securing Kubernetes Dashboards: SSO Authentication and RBAC Implementation with Okta and OAuth2 Proxy

Manish Chaudhary
6 min read · May 4, 2024


Scenario/Problem:- We needed to integrate our internal applications with Okta for SSO. One of those internal applications is the Kubernetes Dashboard. Previously, users accessed the K8s dashboards using two types of Service Account (SA) tokens: admin and read-only.

However, we sought to streamline this process by eliminating the need for SA tokens altogether. Instead, we envisioned having SSO and a seamless login experience where users could authenticate via Okta as the Identity Provider (IDP). Furthermore, we aimed to dynamically assign access and roles based on Okta groups.

For instance, if a user belonged to the SRE group in Okta, they would be granted admin privileges on the Kubernetes dashboard. Conversely, users belonging to other groups would be limited to read-only access.

This transformation not only simplifies the authentication process but also ensures that access privileges align closely with organizational roles and responsibilities.

Kubernetes Dashboard supports two different ways of authenticating users:

  • Authorization header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, the login view will be skipped.
  • Bearer Token that can be used on the Dashboard login view.

We will pass the Authorization header in each request to skip the default K8s Dashboard login view.

General K8s Dashboard Login Flow via Okta and OAuth2 Proxy

  1. The user tries to access the K8s Dashboard from the Okta dashboard or directly via the K8s Dashboard URL.
  2. The user is redirected to oauth2-proxy.
  3. The user is authenticated and authorized against Okta.
  4. The authorization header is passed to the K8s Dashboard ingress.
  5. The user is successfully logged in to the K8s Dashboard.
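
Under the hood, the NGINX Ingress performs an auth_request subrequest against oauth2-proxy for every incoming request. Here is a minimal Python sketch of that decision flow (all names are illustrative; the real logic lives in oauth2-proxy and the ingress controller, not in code you write):

```python
# Sketch of the per-request auth flow between NGINX Ingress and oauth2-proxy.
# The cookie name matches oauth2-proxy's default; the session store is a toy.

def session_is_valid(session: str) -> bool:
    """Toy stand-in for oauth2-proxy's session validation."""
    return session == "valid-session"

def id_token_for(session: str) -> str:
    """Toy stand-in: return the Okta ID token stored in the session."""
    return "eyJ...id-token..."

def auth_subrequest(cookies: dict) -> tuple[int, dict]:
    """Simulate GET /oauth2/auth: 202 if the session is valid, 401 otherwise."""
    session = cookies.get("_oauth2_proxy")
    if session and session_is_valid(session):
        # oauth2-proxy returns the token so NGINX can forward it upstream
        return 202, {"Authorization": f"Bearer {id_token_for(session)}"}
    return 401, {}

def handle_request(cookies: dict) -> str:
    status, headers = auth_subrequest(cookies)
    if status == 202:
        # NGINX copies the headers listed in auth-response-headers upstream,
        # so the dashboard receives the Authorization header and skips login
        return f"proxy to kubernetes-dashboard with {headers}"
    # The auth-signin annotation sends the browser off to start the OIDC flow
    return "302 -> /oauth2/start?rd=<original uri>"
```

This is only meant to make the redirect-then-header handoff concrete; steps 2–4 above are exactly this subrequest plus the redirect fallback.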

Technologies Used

Okta (IDP), OAuth2 Proxy, Kubernetes Dashboard, NGINX Ingress Controller, and AWS EKS.

Now let’s get down to business.

1. Create an Okta Application for Kubernetes Dashboard

  • Choose OIDC and Web Application
  • Add your internal application’s URL as the redirect URI.
  • Add a groups claim for the org authorization server

Use these steps to create a groups claim for an OpenID Connect client app. This approach is recommended if you’re using only Okta-sourced groups. For an org authorization server, you can only create an ID token with a groups claim, not an access token. See Authorization servers for more information on the types of authorization servers available to you and what you can use them for.

  1. Save the Okta application created above.
  2. Go to the Sign On tab and click Edit in the OpenID Connect ID Token section.
  3. In the Group Claim Type section, you can select either Filter or Expression. For this example, leave Filter selected.
  4. In the Group Claims Filter section, leave the default name groups (or add it if the box is empty), and then add the appropriate filter. For this example, select Matches regex and enter .* to return the user's groups. See Okta Expression Language Group Functions for more information on expressions.
  5. Click Save.
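
Once the claim is configured, you can sanity-check what Okta puts in the ID token by decoding its payload segment — a JWT payload is just base64url-encoded JSON. The token below is fabricated for illustration, and this deliberately skips signature verification, so use it for inspection only:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload of a JWT WITHOUT verifying the signature (debug only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake token carrying the claims Okta would include with the setup above
claims = {"email": "user@example.com", "groups": ["Everyone", "SRE"]}
fake_payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
fake_token = f"header.{fake_payload}.signature"

print(jwt_payload(fake_token)["groups"])  # the claim our RBAC group bindings will match
```

If the groups claim is missing here for a real token, revisit the filter regex in step 4.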

2. Deploy Kubernetes Dashboard in EKS

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

3. Deploy Nginx Ingress controller in EKS

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml

4. Deploy Oauth2-Proxy and its service in EKS

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - name: oauth2-proxy
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        imagePullPolicy: Always
        args:
        - --provider=oidc
        - --email-domain=*
        - --http-address=0.0.0.0:4180
        env:
        - name: OAUTH2_PROXY_OIDC_ISSUER_URL
          value: <your okta url>
        - name: OAUTH2_PROXY_REDIRECT_URL
          value: http://<your k8s dash uri>/oauth2/callback
        - name: OAUTH2_PROXY_CLIENT_ID
          value: <k8s dash app client id>
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: <k8s dash app client secret>
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: kgKUT3IMmESA81VWXvRpYIYwMSo1xndwIogUks6IS00=
        - name: OAUTH2_PROXY_UPSTREAM
          value: <your upstream k8s dash url>
        - name: OAUTH2_PROXY_SSL_INSECURE_SKIP_VERIFY
          value: "true"
        - name: OAUTH2_PROXY_INSECURE_OIDC_ALLOW_UNVERIFIED_EMAIL
          value: "true"
        - name: OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER
          value: "true"
        - name: OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY
          value: "true"
        - name: OAUTH2_PROXY_OIDC_EMAIL_CLAIM
          value: email
        - name: OAUTH2_PROXY_GROUPS_CLAIM
          value: groups
        - name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON
          value: "true"
        - name: OAUTH2_PROXY_SET_AUTHORIZATION_HEADER
          value: "true"
        ports:
        - containerPort: 4180
          protocol: TCP

---

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
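
The OAUTH2_PROXY_COOKIE_SECRET must be a random, base64-encoded 32-byte value. Do not reuse the sample value from this article — generate your own (and ideally mount it from a Kubernetes Secret rather than a plain env value):

```python
import base64
import secrets

# oauth2-proxy accepts a 16-, 24-, or 32-byte cookie secret, base64-encoded;
# 32 bytes enables AES-256 cookie encryption.
secret = base64.urlsafe_b64encode(secrets.token_bytes(32)).decode()
print(secret)
```

Paste the printed value into the Deployment (or a Secret referenced by it).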

5. Deploy Ingress for K8s Dashboard and Oauth2-Proxy

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  - host: <your ingress/k8s dash url>
    http:
      paths:
      - path: /oauth2
        pathType: Prefix
        backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "http://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
  name: external-auth-oauth2
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  - host: <your ingress/k8s dash url>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80

Now, hitting your Kubernetes dashboard URL will redirect you to Okta for authentication.

After a successful login, you will be redirected back to your Kubernetes dashboard, but you will see an error like the one below.

You might wonder why we hit this error despite logging in successfully. The integration of Okta, OAuth2 Proxy, and Kubernetes Dashboard is in fact working; the logged-in user simply does not yet have permission to perform any actions.

To fix this, we have to configure RBAC on the Kubernetes side to allow the user (or the groups they belong to) to access resources within Kubernetes, and configure OIDC on the EKS side.

6. Add Okta as an OIDC Provider on Your EKS Cluster

Now let’s get back to the AWS Console:

  • Open the EKS cluster view. Go to the Configuration tab, select Authentication, and click Associate Identity Provider.

Enter the following parameters:

  • Name: Okta
  • Issuer URL: This is the URL you copied earlier from your Okta AuthZ Server.
  • Client ID: This is the value you copied earlier from your Okta OIDC client.
  • Username claim: email
  • Groups claim: groups

Then Save.
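
If you prefer the CLI over the console, the same association can be done with `aws eks associate-identity-provider-config`. The cluster name and Okta values below are placeholders for your own; note that the association takes several minutes to become active:

```shell
aws eks associate-identity-provider-config \
  --cluster-name <your-cluster-name> \
  --oidc identityProviderConfigName=Okta,issuerUrl=<your okta issuer url>,clientId=<k8s dash app client id>,usernameClaim=email,groupsClaim=groups
```

The usernameClaim and groupsClaim must match the claims oauth2-proxy forwards (email and groups above).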

7. Configure RBAC to allow users to access the k8s dashboard based on their Okta groups

For this, we have to create cluster roles and cluster role bindings for admin and read-only access. These cluster roles will apply to users logging in via Okta.

I already have the cluster-admin ClusterRole in my EKS cluster, so I will just create a ClusterRoleBinding to give my Okta group admin access on the k8s dashboard:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: SRE # Note -> This is your Okta group. All users in this group will have admin access on the k8s dashboard
  apiGroup: rbac.authorization.k8s.io

Now, to give read access to every user who is not part of our SRE group in Okta, we will create a read-only ClusterRole and a ClusterRoleBinding that binds the “Everyone” group. By default, all users in Okta are part of the Everyone group.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only-cluster-role
rules:
- apiGroups:
  - ""
  resources:
  - bindings
  - componentstatuses
  - configmaps
  - endpoints
  - events
  - limitranges
  - namespaces
  - nodes
  - persistentvolumeclaims
  - persistentvolumes
  - pods
  - podtemplates
  - replicationcontrollers
  - resourcequotas
  - serviceaccounts
  - services
  - pods/log
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-user-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-only-cluster-role
subjects:
- kind: Group
  name: Everyone
  apiGroup: rbac.authorization.k8s.io

Once these cluster roles and bindings are created, users in the SRE group will be able to access all resources upon logging in to the Kubernetes dashboard via Okta, while users not in the SRE group will have read-only access.
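
You can verify the bindings from the API server’s point of view with kubectl impersonation before handing this to users. The email addresses below are placeholders (the OIDC username claim is the user’s email), and this assumes no username/groups prefix was set when associating the identity provider:

```shell
# Should answer "yes": SRE members are bound to cluster-admin
kubectl auth can-i delete pods --as=jane@example.com --as-group=SRE

# Should answer "no": everyone else only has get/list/watch
kubectl auth can-i delete pods --as=john@example.com --as-group=Everyone
kubectl auth can-i list pods --as=john@example.com --as-group=Everyone  # "yes"
```

Running these requires a kubeconfig identity that itself has impersonation rights (e.g. cluster-admin).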

PS:- If you try the above steps and run into any issues, feel free to comment here or connect with me on LinkedIn. I will be happy to help.

https://www.linkedin.com/in/imanishchaudhary/
