Kubernetes API Server Configuration
Automated Configuration
Kubauth provides a Helm chart that spawns appropriate jobs to perform this configuration in a fully automated manner.
This process assumes that the API Server is managed by the Kubelet as a static pod. If your API Server is managed by another system, such as systemd, you should fall back to the manual configuration.
Additionally, this process assumes a 'standard' folder layout for the Kubernetes installation, such as those used by kind or kubespray. If this is not the case, adjustments can be made by overriding values in the Helm chart. Refer to its values file.
Danger
If you are performing this task on a critical cluster, we strongly recommend reading the Manual Configuration section below. This will help you fully understand what happens under the hood, enabling you to roll back manually in case of problems.
Since several configuration variables are required, we recommend using a values file rather than command-line arguments.
In your working directory, create a file like the following:
values-k8s.yaml
- Replace `kubauth.mycluster.mycompany.com` with your Kubauth entry point.
- `issuerCaSecretName`: a secret hosting the CA of the issuer URL. In this sample, we use trust-manager, which creates such a secret (here `certs-bundle`) in each namespace.
- `issuerCaName`: the path of the CA certificate inside the secret.
- `clientId`: should match the name of the `OidcClient` declared previously. Note that a `clientSecret` value is not needed, as the API server will not connect to Kubauth.
- `usernamePrefix`: prefix prepended to username claims to prevent clashes with existing names. A dash value means no prefix.
- `groupsPrefix`: prefix prepended to group claims to prevent clashes with existing names. Cannot be empty for security reasons. The default is `oidc:`, but in this sample, we set it to `oidc-`.
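A minimal sketch of such a file, reusing the sample values shown in this guide (the exact key names and nesting are assumptions — check them against the chart's values file):

```yaml
# Hypothetical layout — verify key names against the chart's values file.
issuerUrl: https://kubauth.mycluster.mycompany.com
issuerCaSecretName: certs-bundle
issuerCaName: ca.crt        # path of the CA certificate inside the secret (assumed)
clientId: k8s
usernamePrefix: "-"         # a dash means no prefix
groupsPrefix: "oidc-"
```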
Note
This values.yaml file represents the minimum required configuration. Refer to the Helm chart values file for additional variables.
You can now proceed with the configuration using the dedicated Helm chart:
```shell
helm -n kubauth upgrade -i kubauth-apiserver --values ./values-k8s.yaml oci://quay.io/kubauth/charts/kubauth-apiserver --version 0.2.0 --create-namespace --wait
```
Note
This process will take some time, so please be patient.
At the end of this process, a rolling restart is performed on the API Server. If there is only a single instance of this critical pod, you will temporarily lose contact with your cluster.
This may trigger the restart of many other pods. Wait for your cluster to reach a stable state before proceeding further.
Verifying Installation
If your cluster is up and running, there is a good chance the installation was successful.
For a more thorough verification, inspect the API server parameters:
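One way to do this (the `component=kube-apiserver` label is typical of kubeadm-style static pods; adjust the selector if your distribution labels differ):

```shell
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml
```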
The output should include the following OIDC parameters:
```yaml
apiVersion: v1
kind: Pod
metadata:
  ........
spec:
  containers:
  - command:
    - kube-apiserver
    - --oidc-ca-file=/etc/kubernetes/kubauth-kit/ca.crt
    - --oidc-groups-prefix=oidc-
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-
    - --oidc-username-claim=sub
    - --oidc-client-id=k8s
    - --oidc-issuer-url=https://kubauth.mycluster.mycompany.com
    ......
```
If you suspect something went wrong, refer to the manual installation section below to check the configuration.
Also review the API server logs and/or kubelet logs.
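For example (the pod label and the systemd unit name are typical of kubeadm-style setups; adjust as needed):

```shell
# API server logs
kubectl -n kube-system logs -l component=kube-apiserver --tail=200

# Kubelet logs, on the node itself (assumes a systemd-managed kubelet)
journalctl -u kubelet --since "10 min ago"
```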
Removal
Uninstalling the Helm chart should restore the original API server configuration:
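With the release name used above, this is:

```shell
helm -n kubauth uninstall kubauth-apiserver
```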
Manual Configuration
The API server configuration for connecting to an OIDC provider is described in the Kubernetes documentation.
Depending on your specific installation, the directories mentioned below may vary. For reference, the clusters used for testing and documentation purposes were built using kind.
Additionally, this procedure assumes that the API Server is managed by the Kubelet as a static pod. If your API Server is managed by another system, such as systemd, you should make the necessary adjustments accordingly.
Note
The following operations must be executed on all nodes hosting an instance of the Kubernetes API server, typically all nodes within the control plane.
These operations require root access on these nodes. Each node must also have kubectl installed and an editor available.
For each node:
- Log into it:
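For example, assuming SSH access as root (replace the host with your node's address):

```shell
ssh root@<node-address>
```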
- Create a folder to store the Kubauth issuer URL certificate (`-kit` stands for Kubernetes Integration Toolkit):
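The folder matches the `--oidc-ca-file` path used in the manifest:

```shell
mkdir -p /etc/kubernetes/kubauth-kit
```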
- Fetch the Kubauth issuer URL certificate and store it in the newly created folder:
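One possible way, assuming the CA is published in the `certs-bundle` secret mentioned earlier, under the key `ca.crt` (both the secret name and the key are assumptions to adapt to your setup):

```shell
kubectl -n kubauth get secret certs-bundle -o jsonpath='{.data.ca\.crt}' \
  | base64 -d > /etc/kubernetes/kubauth-kit/ca.crt
```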
- Edit the `kube-apiserver.yaml` manifest and make the following modifications:
```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - "--oidc-ca-file=/etc/kubernetes/kubauth-kit/ca.crt"
    - "--oidc-groups-prefix=oidc-"
    - --oidc-groups-claim=groups
    - "--oidc-username-prefix=-"
    - --oidc-username-claim=sub
    - --oidc-client-id=k8s
    - "--oidc-issuer-url=https://kubauth.mycluster.mycompany.com"
    ..........
    ..........
    volumeMounts:
    - mountPath: /etc/kubernetes/kubauth-kit
      name: kubauth-kit-config
    ..........
  ..........
  volumes:
  - hostPath:
      path: /etc/kubernetes/kubauth-kit
      type: ""
    name: kubauth-kit-config
  ..........
```

- Quotes (`"`) are important.
- In `--oidc-issuer-url`, replace `kubauth.mycluster.mycompany.com` with your Kubauth entry point.
- `--oidc-client-id` should match the name of the `OidcClient` declared previously. Note that a `clientSecret` value is not needed, as the API server will not connect to Kubauth.
- `--oidc-username-prefix`: prefix prepended to username claims to prevent clashes with existing names. A dash value means no prefix.
- `--oidc-groups-prefix`: prefix prepended to group claims to prevent clashes with existing names. Cannot be empty for security reasons. The default is `oidc:`, but in this sample, we set it to `oidc-`.
- The `volumeMounts` and `volumes` sections allow the certificate to be accessible from inside the container.
- Modifying the `/etc/kubernetes/manifests/kube-apiserver.yaml` file will trigger a restart of the API server pod.
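Once the pod has restarted, a quick check that the new flags are active (again assuming the kubeadm-style `component=kube-apiserver` label):

```shell
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep oidc
```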