Protect Kubernetes Dashboard using oauth2-proxy and Keycloak
Kubernetes Dashboard is an excellent web client for Kubernetes clusters. Even if I prefer locally installed clients (kubectl and k9s are enough for me :-D), a web UI is handy when you have a random group of users (developers?) and you don’t want to give them access to the API server, or you don’t want to force them to install and configure a Kubernetes client only to open the logs of some pod once a week.
The Kubernetes Dashboard is very simple: it’s a single-page application that uses a web server component to serve static files and bridge requests to the API server. The API server should be reachable only by the Dashboard server instance itself. The Dashboard uses a token provided by the user to authenticate against the API server.
My playbook for deploying Kubernetes Dashboard includes OAuth2-proxy as a “proxy” to authenticate users and provide a token to the Dashboard itself for the Kubernetes API.
Pre: OIDC / OAuth
I assume that you have some basic knowledge of OIDC (vocabulary, flows, etc.). If you don’t know OIDC/OAuth protocol, you can learn the basics in this beautiful guide: https://developer.okta.com/blog/2019/10/21/illustrated-guide-to-oauth-and-oidc
Configure Keycloak
Keycloak is an identity and access management solution, OIDC/OAuth and SAML compatible. It implements common flows for identity management: registration, password recovery, simple or MFA authentication, etc. The user database can be internal or external (LDAP/Kerberos).
Also, it can be configured as Identity broker, meaning that you can link identity providers to Keycloak (for example, GitHub) to allow users to use their external identity. Applications don’t need to support other providers, as Keycloak will act as a proxy for them.
I won’t go down in the rabbit hole, and I’ll assume that you already configured Keycloak for identity brokering or identity source. Let’s configure it for our Dashboard!
- Create a new scope in “Client scopes”. Name it groups (you can use a custom name; remember to change it in the rest of the config). This will be a new openid-connect protocol client scope.
- Add a new mapper in “Mappers” for the new groups scope. The new mapper should have these settings:
  - Name: groups
  - Mapper type: Group Membership
  - Token claim name: groups
  - Full group path: ON if you want to address groups using the full path in Keycloak (e.g. /biggergroup/smallergroup), OFF if you want to use the plain group name only
  - Add to ID token: ON
  - Add to access token: ON
  - Add to userinfo: ON
- Add a new client for the oauth2-proxy. Use these settings:
  - Client protocol: openid-connect
  - Access type: confidential
  - Valid redirect URIs: https://oauth2-host/oauth2/callback (where https://oauth2-host is the oauth2-proxy public address)
- In “Client Scopes”, assign the groups scope to the client
- In “Credentials”, use “Client Id and Secret”, generate a new secret (if there is none) and take note of it
Keycloak is now ready to provide the OAuth2 service :-)
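Before wiring anything to Kubernetes, it can be useful to sanity-check that the mapper actually emits the groups claim in the tokens Keycloak issues. A minimal sketch of a helper that decodes a JWT payload without verifying the signature (the token here is a fabricated example built in-place, not a real Keycloak token):

```python
import base64
import json


def jwt_claims(token: str) -> dict:
    """Decode the payload of a JWT without verifying its signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))


# Build a fake header.payload.signature token just to demonstrate the helper
claims = {"email": "alice@example.com", "groups": ["/biggergroup/smallergroup"]}
fake = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "",  # empty signature
])
print(jwt_claims(fake)["groups"])  # → ['/biggergroup/smallergroup']
```

Paste a real ID token from Keycloak into jwt_claims and check that the groups claim lists the expected groups (full paths if you enabled “Full group path”).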
Prepare Kubernetes cluster
The Kubernetes API has different security configurations for both authentication and authorization. We’ll see how to use OIDC token authentication and RBAC authorization. The former will allow us to use Keycloak as Identity provider for Kubernetes API, while we’ll use the latter to specify “which group has access to what” (the “group” from Keycloak will be our “role” in the Kubernetes cluster).
First, we need to tell the Kubernetes cluster to accept OIDC tokens from our Keycloak installation. To do so, the apiserver must be started with the following options:
# This is the base URL for the realm
# e.g. if the realm is "test1", the URL will be http://keycloak-server/auth/realms/test1
--oidc-issuer-url=https://keycloak-server/auth/realms/test1
# This is the client ID provided by the OIDC provider
--oidc-client-id=kubernetes
# If the OIDC provider is using a certificate signed by an internal authority, use this option to inject the CA certificate
--oidc-ca-file=/etc/kubernetes/ssl/dex-ca.pem
# This is the claim used for identifying the user inside Kubernetes
# Note that everything except email claim will be considered only if they have a prefix (see below)
--oidc-username-claim=email
# Prefix (inside the Kubernetes cluster) for the claim above
--oidc-username-prefix=oidc:
# Group claim settings (same as the username claim, but for groups)
--oidc-groups-claim=groups
--oidc-groups-prefix=oidc:
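To verify the apiserver configuration independently of the Dashboard, you can make kubectl present an OIDC ID token as a bearer token. A sketch of a kubeconfig user entry (the user name is hypothetical, and <id-token> is a placeholder for a token obtained from Keycloak for a test user):

```yaml
# Fragment of a kubeconfig file
users:
- name: oidc-test-user
  user:
    token: <id-token>
```

With this user selected in your context, requests are authenticated as oidc:<email claim>, so a simple "kubectl get namespaces" will tell you whether the token is accepted and which permissions the bound roles grant.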
K3S
If you use k3s, you can modify /etc/systemd/system/k3s.service to configure apiserver launch options. For example:
# The ExecStart command should be something like:
ExecStart=/usr/local/bin/k3s \
server \
'--kube-apiserver-arg' \
'oidc-issuer-url=...' \
'--kube-apiserver-arg' \
'oidc-client-id=...' \
'--kube-apiserver-arg' \
  'oidc-username-claim=...'
# Repeat --kube-apiserver-arg for every option above, passing each one without the leading double dash
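As an alternative to editing the unit file, recent k3s releases can also read server options from a configuration file, which survives upgrades better. A sketch, assuming your k3s version supports /etc/rancher/k3s/config.yaml and using the same values as the flags above:

```yaml
# /etc/rancher/k3s/config.yaml
kube-apiserver-arg:
  - "oidc-issuer-url=https://keycloak-server/auth/realms/test1"
  - "oidc-client-id=kubernetes"
  - "oidc-username-claim=email"
  - "oidc-username-prefix=oidc:"
  - "oidc-groups-claim=groups"
  - "oidc-groups-prefix=oidc:"
```

Restart the k3s service after changing either the unit file or the config file.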
RBAC
In Kubernetes RBAC, there are two different kinds of rulesets: Role and ClusterRole. Role is used to indicate a set of (additive-only) permissions in a namespace; ClusterRole is a set of permissions for the whole cluster (for example, listing namespaces is in ClusterRole).
To assign one or more Role or ClusterRole to users and groups, Kubernetes uses RoleBinding and ClusterRoleBinding objects.
You can find role documentation and examples in the official Kubernetes documentation for RBAC.
Assuming that you started the server using RBAC, you can assign cluster roles to OIDC groups as follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-namespaces
roleRef:
  kind: ClusterRole
  name: list-namespaces
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: oidc:Group1
  apiGroup: rbac.authorization.k8s.io
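The binding above references a list-namespaces ClusterRole, which is not built into Kubernetes; a minimal definition could be:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: list-namespaces
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list"]
```

Members of Group1 in Keycloak (seen by Kubernetes as oidc:Group1) will then be able to list namespaces, and nothing else, until you bind more roles.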
Configure oauth2-proxy
Now we need to glue together the Kubernetes dashboard, oauth2-proxy, Keycloak and Kubernetes.
I won’t show how to deploy a Kubernetes Dashboard (the project page on GitHub already contains the deploy guide). I’ll assume that the Dashboard is accessible at a given URL. You can deploy it in a Kubernetes cluster: in this case, I suggest you deploy oauth2-proxy as a sidecar container and expose the plaintext Kubernetes Dashboard only to localhost (for oauth2-proxy).
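For the sidecar approach, the Deployment could look roughly like this sketch (image tags, names, and the Dashboard flags are illustrative assumptions, not the official manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: dashboard
        image: kubernetesui/dashboard:v2.7.0
        # Serve plain HTTP on the pod's loopback interface only,
        # so only the sidecar can reach it
        args:
        - --insecure-bind-address=127.0.0.1
        - --insecure-port=9090
      - name: oauth2-proxy
        image: quay.io/oauth2-proxy/oauth2-proxy:v7.5.0
        ports:
        - containerPort: 4180
        env:
        - name: OAUTH2_PROXY_UPSTREAMS
          value: "http://127.0.0.1:9090/"
        # ...plus the other OAUTH2_PROXY_* variables from this post
```

Since containers in a pod share the network namespace, oauth2-proxy reaches the Dashboard on 127.0.0.1 while only port 4180 is exposed via a Service.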
I’ll assume that you have some reverse proxy or ingress which terminates TLS (e.g. traefik).
As oauth2-proxy doesn’t support passing the Authorization header when the Keycloak provider is used (this is a known bug), we will configure oauth2-proxy using the options for a generic OIDC provider.
These are the environment variables needed:
# oauth2-proxy uses cookies to store information about the user. You can skip this if you configure Redis
OAUTH2_PROXY_COOKIE_DOMAIN=oauth2-domain
OAUTH2_PROXY_COOKIE_SECRET=# Generate secret using the command from the oauth2-proxy documentation
# Skip the "Login with" button, go directly to Keycloak
OAUTH2_PROXY_SKIP_PROVIDER_BUTTON=true
# Pass the authorization header to the Kubernetes dashboard
OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER=true
# Listen to wildcard address
OAUTH2_PROXY_HTTP_ADDRESS=0.0.0.0:4180
# Accept all emails (needed)
OAUTH2_PROXY_EMAIL_DOMAINS=*
# Provider configuration
OAUTH2_PROXY_PROVIDER=oidc
OAUTH2_PROXY_OIDC_ISSUER_URL=https://keycloak-server/auth/realms/test1
OAUTH2_PROXY_CLIENT_ID=clientid
OAUTH2_PROXY_CLIENT_SECRET=clientsecret
OAUTH2_PROXY_ALLOWED_GROUPS=group1
# If the oauth2-proxy can't detect the correct URL (most common in some reverse proxy config or docker-compose)
# you can use this option to enforce the callback URL
OAUTH2_PROXY_REDIRECT_URL=https://oauth2-host/oauth2/callback
# Upstream (for reverse proxy)
OAUTH2_PROXY_UPSTREAMS=https://kubernetes-dashboard-url/
# Use this if your kubernetes dashboard is exposing self-signed HTTPS certificates
OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY=true
Conclusions
That’s all, folks! Now you can expose the oauth2-proxy to a public endpoint (use TLS), and your Keycloak users (within the allowed groups) will be able to access the Dashboard with the permissions you granted to their groups in the cluster.