Kubernetes is a powerful cluster-wide management system for application containers that has become a standard for container orchestration. It allows developers to manage their deployments, including scaling and monitoring, in a highly efficient and automated manner. But this flexibility has a price: complexity. There are many concepts to learn, different tools to work with, and it may be difficult to feel that you really have control over it.
Many articles about Kubernetes present a hybrid mode of operation where resource creations and updates are mostly declarative, but deletions are imperative (or there are no deletions at all). The apply operation in Kubernetes only overwrites the specified resources, so if you rename something and apply the updated configuration, both the old and the new resources will be in the final state. To address this, the community developed a number of tools like Helm and Kapp to make it easier to manage and upgrade a set of resources.
This article explains how to deploy resources to a Kubernetes cluster in a way that creates, updates, and prunes resources so that the remote configuration matches exactly the applied configuration. It also discusses how to deal with Helm charts to include configuration from the community, and encrypt secrets to store them in a Git repository, since versioning the entire configuration is an important part of this workflow.
§Deploying with Kustomize
You can set up a Kubernetes test environment locally with tools like Minikube. Provided Docker is running, you should be able to start a test environment with this command:
$ minikube start
Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
+ kubelet.localStorageCapacityIsolation=false
Configuring bridge CNI (Container Networking Interface) ...
Verifying Kubernetes components...
+ Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Next, follow Kubectl's installation instructions. These commands should return some information about the server:
$ kubectl version
Client Version: v1.26.3
Kustomize Version: v4.5.7
Server Version: v1.26.1
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.49.2:8443
CoreDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
You should also create a Git repository to version your configuration files and allow rollbacks:
$ git init infra && cd infra
Initialized empty Git repository in ./infra/.git/
Do not hesitate to commit your changes as you follow along.
§Kustomization
Kubectl can apply a set of configuration files that represent Kubernetes resources from their path, given as argument to the flag -f:
$ kubectl apply -f deployment.yaml
A component called Kustomize can apply a group of resources listed in a kustomization.yaml file. This is the same as bundling these resources into one big YAML file, where each resource is separated by ---, and then applying it with kubectl apply -f.
This feature can help organize the configuration files with a directory per namespace, and inside them, a directory per "application", each having its own kustomization.yaml:
$ tree
infra
├── default
│   ├── file-server
│   │   ├── deployment.yaml
│   │   └── kustomization.yaml
│   └── kustomization.yaml
└── kustomization.yaml
Inside the top-level kustomization.yaml, source the default namespace directory as follows:
resources:
- default
Inside the default directory, source the file-server application directory:
resources:
- file-server
In the directory default/file-server, we will just add a basic Deployment for darkhttpd, also sourced from the associated kustomization.yaml:
resources:
- deployment.yaml
You can then create the test-file-server Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-file-server
spec:
  selector:
    matchLabels:
      app: test-file-server
  template:
    metadata:
      labels:
        app: test-file-server
    spec:
      containers:
        - name: file-server
          image: alpinelinux/darkhttpd
          ports:
            - containerPort: 8080
Kustomize is not limited to bundling configuration files. It allows "customization" by applying attributes like namespaces and labels to all the nested resources.
Because the resources in the default directory should be deployed to the default namespace, you can specify a namespace attribute in default/kustomization.yaml that will apply to all nested resources:
namespace: default
resources:
- file-server
You can inspect the configuration after it went through Kustomize, which corresponds to the previous Deployment with the namespace attribute added:
$ kubectl kustomize .
Output:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-file-server
  namespace: default
spec:
  selector:
    matchLabels:
      app: test-file-server
  template:
    metadata:
      labels:
        app: test-file-server
    spec:
      containers:
        - image: alpinelinux/darkhttpd
          name: file-server
          ports:
            - containerPort: 8080
§Apply
The next step is to apply this configuration on the server with kubectl apply. The flag -k will look for a kustomization.yaml file in the directory passed as argument to build the configuration, just like you did previously with kubectl kustomize, and apply it on the server:
$ kubectl apply -k .
deployment.apps/test-file-server configured
You should have a test-file-server pod running:
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
test-file-server-d7bf8b478-xt7mh   1/1     Running   0          14s
You can refer to Kubectl Cheat Sheet § Viewing, finding resources for additional commands that can help you inspect this Deployment. Try to print the server-side Deployment configuration in YAML format and compare it with the local version.
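For example, a quick way to do that (a sketch) is to dump the live object and render the local configuration side by side; expect extra server-managed fields such as status, creationTimestamp, and resourceVersion in the live version:
$ kubectl get deployment test-file-server -o yaml
$ kubectl kustomize .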
To connect to the file server, create a port forward from the host to the pod:
$ kubectl port-forward test-file-server-d7bf8b478-xt7mh 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Then you can open the index page at http://127.0.0.1:8080/ in a web browser.
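If you prefer to stay in the terminal, you can fetch the same page with cURL from another shell:
$ curl http://127.0.0.1:8080/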
§Delete
Since the file server appears to be working, let's end our experimentation and rename the Deployment from test-file-server to file-server:
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: test-file-server
+  name: file-server
 spec:
   selector:
     matchLabels:
-      app: test-file-server
+      app: file-server
   template:
     metadata:
       labels:
-        app: test-file-server
+        app: file-server
     spec:
       containers:
         - name: file-server
Re-apply the configuration:
$ kubectl apply -k .
deployment.apps/file-server created
Although it isn't mentioned in the previous command output, the old test Deployment isn't cleaned up, so you end up with two Deployments of the file server:
$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
test-file-server   1/1     1            1           18m
file-server        1/1     1            1           5s
By default, kubectl apply only creates or updates resources. You could clean up this Deployment using an imperative kubectl delete command:
$ kubectl delete deployments test-file-server
deployment.apps "test-file-server" deleted
But this is exactly what we want to avoid. You will see in the next section how you can prune resources declaratively.
§Pruning resources
Removing resources from your local configuration is not enough to remove them from the cluster configuration, and it wouldn't be appropriate for Kubectl to remove everything that is on the server but isn't in the local configuration.
Indeed, some resources are created dynamically by services running inside your Kubernetes cluster, so you will have to restrict pruning to the resources you actually created. This is done using labels and selectors.
Also, stateful resources like volumes, or the namespaces that contain different kinds of resources, shouldn't be pruned without careful consideration. For that reason, Kubectl also requires us to be explicit about the types of resources we want to prune.
The allowlist prune feature built into Kubectl since Kubernetes v1.5 (2016-12-13) doesn't work with server-side applies and has some correctness issues, which explains why it never got out of the experimental stage.
Recently, it underwent a complete redesign tracked in KEP 3659, which introduces a new standard called ApplySet that records in a Secret the set of applied Kubernetes objects to provide fast and correct pruning.
The alpha implementation landed in Kubernetes v1.27 (2023-04-11). Once stabilized, I will update this section accordingly.
§Labels and selectors
Kubernetes supports labels, which are key/value pairs attached to resources. You can use selectors to find a subset of resources with a given label. Do not confuse them with annotations, which aren't indexed (they cannot be selected efficiently). Your previous Deployment of the file server uses labels internally, since it only targets pods that have the label app: file-server.
To identify all the resources you deployed through Kustomize, and to scope all Kubectl's operations on them, you can add a dedicated label:
commonLabels:
  dzx.fr/managed-by: kustomize
resources:
  - default
Using a fully-qualified domain name that you own is a simple way to avoid conflicts with common or third-party labels. Try to print the final configuration with kubectl kustomize . to check that the label is applied properly.
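For instance, a simple sanity check is to grep the rendered output for the label key:
$ kubectl kustomize . | grep dzx.fr/managed-by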
The prune command in Kubectl is similar to the apply command, but it has an additional --prune option to remove Kubernetes resources that are no longer defined in the local configuration. To use this option, you have to provide a label selector with the option -l, which specifies the resources you want to prune.
For example, if you pass -l dzx.fr/managed-by=kustomize, the command will apply and prune the resources that have the label dzx.fr/managed-by=kustomize, leaving anything else untouched.
At the end of a CI pipeline, it is useful to select only the resources that are part of this application to ensure the deployment doesn't affect other resources. To that end, you can also add a label for the application name:
commonLabels:
  dzx.fr/name: file-server
resources:
  - deployment.yaml
A selector can contain comma-separated values to select multiple labels for the apply command. For instance, -l dzx.fr/managed-by=kustomize,dzx.fr/name=file-server will only apply to resources that are labelled with both dzx.fr/managed-by=kustomize and dzx.fr/name=file-server. Kubernetes provides multiple ways to select labels.
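Once the labels are applied, the same selectors work with any Kubectl query command; both the equality-based and set-based forms below should list the managed Deployments:
$ kubectl get deployments -l dzx.fr/managed-by=kustomize
$ kubectl get deployments -l 'dzx.fr/managed-by in (kustomize)'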
If you try to apply this configuration, you will likely encounter the following error:
The Deployment "file-server" is invalid: spec.selector: field is immutable
A Deployment manages a set of pods according to the matchLabels attribute. Being able to edit these labels while the Deployment is active could result in orphaned pods that do not match the selector anymore (the only safe action is to remove a label).
The only way to update these immutable fields is to recreate the Deployment, which leaves two options:
- Delete the Deployment with kubectl delete and apply the configuration to recreate it.
- Pass --force to kubectl apply to let Kubernetes recreate it (you cannot use --force with --prune).
Recreating the Deployment ensures the associated pods get cleaned up and recreated with the appropriate labels.
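Concretely, the first option looks like this (assuming the Deployment is currently named file-server):
$ kubectl delete deployment file-server
$ kubectl apply -k .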
§Resource types
At the time of writing, it is still possible to prune some resources without specifying their type, but this behavior is deprecated. The apply command wants you to be explicit about the resource types you want to prune with the option --prune-allowlist, for example, --prune-allowlist=apps/v1/Deployment to target Deployments.
You can try to rename the file server Deployment as we did previously, and run the following command to see that the previous version gets pruned:
$ kubectl apply -k . --prune --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
deployment.apps/file-server created
deployment.apps/test-file-server pruned
Supplying the allowed resource types by hand is enough for simple configurations, but it would be better if we could generate this list procedurally, especially since, in addition to the built-in resources, Kubernetes supports the definition of custom resources. For example, this extension mechanism is used by Traefik and Cert-Manager.
Collecting them directly from the configuration files wouldn't always work, because if you remove all the resources of some type, they wouldn't get pruned the next time you apply this configuration. Kubectl provides a command to query the server for the list of all the supported resource types, from which you can filter the resources that can be pruned (they must support the verb delete):
$ kubectl api-resources -o wide --verbs=delete
NAME          SHORTNAMES   APIVERSION   NAMESPACED   KIND         VERBS
configmaps    cm           v1           true         ConfigMap    create,delete,deletecollection,get,list,patch,update,watch
endpoints     ep           v1           true         Endpoints    create,delete,deletecollection,get,list,patch,update,watch
namespaces    ns           v1           false        Namespace    create,delete,get,list,patch,update,watch
deployments   deploy       apps/v1      true         Deployment   create,delete,deletecollection,get,list,patch,update,watch
...
The main issue with this command is that it doesn't list the resource types according to the strict format required by the option --prune-allowlist=<group/version/kind>: you have to join the APIVERSION column with the KIND column.
A further tweak is required for the core resources that have an APIVERSION equal to v1. If you pass --prune-allowlist=v1/ConfigMap, you would get an error message:
error: invalid GroupVersionKind format: v1/ConfigMap, please follow <group/version/kind>
Indeed, the core group must be specified for these resources, as in core/v1/ConfigMap. If you need a quick and dirty solution, the following shell command generates the list of --prune-allowlist options for all the resource types supported by the server:
$ kubectl api-resources -o wide > /tmp/api-resources; \
grep 'delete' /tmp/api-resources \
| cut -c$(grep -b -o APIVERSION /tmp/api-resources | awk -F ':' '{ print $1 }')- \
| awk '{ print $1"/"$3 }' \
| sed 's#^v1/#core/v1/#' \
| xargs -I {} echo '--prune-allowlist={}' \
| xargs
--prune-allowlist=core/v1/ConfigMap --prune-allowlist=core/v1/Endpoints ...
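For example, if you wrap that pipeline in a small script, here a hypothetical prune-allowlist.sh, its output can be spliced directly into the apply command:
$ # prune-allowlist.sh is a hypothetical wrapper around the pipeline above
$ kubectl apply -k . --prune -l dzx.fr/managed-by=kustomize $(./prune-allowlist.sh)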
For a more robust solution, you could rely on Kubernetes' Go client library to generate the list of prunable resource names:
package main

import (
	"slices"

	"k8s.io/client-go/kubernetes"
)

// listPrunableResourceNames queries the server for its preferred resource types
// and returns them in the <group/version/kind> format expected by --prune-allowlist.
func listPrunableResourceNames(clientset *kubernetes.Clientset) ([]string, error) {
	groups, err := clientset.DiscoveryClient.ServerPreferredResources()
	if err != nil {
		return nil, err
	}
	names := []string{}
	for _, group := range groups {
		groupVersion := group.GroupVersion
		if groupVersion == "v1" {
			// The core resources must be prefixed with the "core" group.
			groupVersion = "core/v1"
		}
		for _, res := range group.APIResources {
			// Only resources that support the delete verb can be pruned.
			if slices.Contains(res.Verbs, "delete") {
				names = append(names, groupVersion+"/"+res.Kind)
			}
		}
	}
	return names, nil
}
To keep the commands short in the remaining code snippets, I will only specify the resource types that we use.
§Diffs
Even with a selector and an explicit allowlist, pruning server resources shouldn't be taken lightly. Versioning your configuration files provides a way to review changes locally, but a modification that looks benign may have unintended effects on the generated configuration.
You could automate the pruning step on a subset of non-stateful resources, and carefully prune the remaining resources. You can use the following command to generate a diff between the server and the client configurations that shows the updates and deletions that would be performed by kubectl apply:
$ kubectl diff -k . --prune --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
If you add an annotation on the file server Deployment:
 kind: Deployment
 metadata:
   name: file-server
+  annotations:
+    server: darkhttpd
 spec:
   selector:
     matchLabels:
Kubectl should generate the following diff:
diff -u -N /tmp/LIVE-2742216252/apps.v1.Deployment.default.file-server /tmp/MERGED-2647145719/apps.v1.Deployment.default.file-server
--- /tmp/LIVE-2742216252/apps.v1.Deployment.default.file-server 2023-04-24 01:44:47.492123274 +0200
+++ /tmp/MERGED-2647145719/apps.v1.Deployment.default.file-server 2023-04-24 01:44:47.492123274 +0200
@@ -5,8 +5,9 @@
     deployment.kubernetes.io/revision: "1"
     kubectl.kubernetes.io/last-applied-configuration: |
       {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"dzx.fr/managed-by":"kustomize","dzx.fr/name":"file-server"},"name":"file-server","namespace":"default"},"spec":{"selector":{"matchLabels":{"app":"file-server","dzx.fr/managed-by":"kustomize","dzx.fr/name":"file-server"}},"template":{"metadata":{"labels":{"app":"file-server","dzx.fr/managed-by":"kustomize","dzx.fr/name":"file-server"}},"spec":{"containers":[{"image":"alpinelinux/darkhttpd","name":"file-server","ports":[{"containerPort":8080}]}]}}}}
+    server: darkhttpd
   creationTimestamp: "2023-04-23T23:39:52Z"
-  generation: 1
+  generation: 2
   labels:
     dzx.fr/managed-by: kustomize
     dzx.fr/name: file-server
The drawback of this command is that the final configuration is generated implicitly for both the diff and the apply commands. Although unlikely, the source files may change in between. As a solution, you could generate the configuration first with kubectl kustomize, review it with kubectl diff, and then apply it:
$ KCONF="$(mktemp)"
$ kubectl kustomize . > "$KCONF"
$ kubectl diff -f "$KCONF" --prune --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
$ kubectl apply -f "$KCONF" --prune --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
$ rm "$KCONF"
That ensures the configuration that gets applied is exactly what you reviewed.
§Handling Helm charts
The leading tool to deploy complex applications on Kubernetes is Helm. A Secret records the state of each release. On upgrade, the new configuration is compared to the previous release, and only the components that changed are updated.
If you modify a resource controlled by Helm, the changes won't be reverted by subsequent upgrades unless you also modify the source configuration for that specific resource. This is because Helm upgrades depend on the recorded state instead of the actual state, which prevents it from working in a truly declarative way.
Fortunately, we can turn Helm charts into plain Kubernetes configuration files that we can version in our repository and deploy with Kubectl. In this section, you will see how to configure and deploy the Helm chart for Traefik, a cloud native reverse proxy.
§Setup
Follow Helm's installation instructions. On most Linux distributions, it should be available from your package manager:
$ pacman -S helm
Helm charts are commonly distributed as part of a repository. You will have to add the Traefik Labs repository:
$ helm repo add traefik https://helm.traefik.io/traefik
"traefik" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
Update Complete. ⎈Happy Helming!⎈
Then, you can list the charts available for install:
$ helm search repo traefik
NAME CHART VERSION APP VERSION DESCRIPTION
traefik/traefik 22.1.0 v2.9.10 A Traefik based Kubernetes ingress controller
traefik/traefik-mesh 4.1.1 v1.4.8 Traefik Mesh - Simpler Service Mesh
traefik/traefikee 1.9.1 v2.9.2 Traefik Enterprise is a unified cloud-native ne...
traefik/hub-agent 1.5.7 v1.4.2 Traefik Hub is an all-in-one global networking ...
traefik/maesh 2.1.2 v1.3.2 Maesh - Simpler Service Mesh
In the next section, you will see how to add traefik/traefik to your configuration.
§Configuration
Let's create a directory for the deployment of Traefik:
$ mkdir default/traefik
Source this directory from the default namespace kustomization.yaml:
namespace: default
resources:
- file-server
- traefik
Then, create the associated kustomization.yaml file:
commonLabels:
  dzx.fr/name: traefik
resources:
  - traefik.yaml
Helm charts rely on the concept of value files. They contain the context that is injected into the resource template files. You can view the default values with the following command:
$ helm show values traefik/traefik
Output:
# Default values for Traefik
image:
  repository: traefik
  # defaults to appVersion
  tag: ""
  pullPolicy: IfNotPresent
...
# Configure Traefik static configuration
# Additional arguments to be passed at Traefik's binary
# All available options available on https://docs.traefik.io/reference/static-configuration/cli/
## Use curly braces to pass values: `helm install --set="additionalArguments={--providers.kubernetesingress.ingressclass=traefik-internal,--log.level=DEBUG}"`
additionalArguments: []
#  - "--providers.kubernetesingress.ingressclass=traefik-internal"
#  - "--log.level=DEBUG"
...
...
You can override any of these properties by creating your own values.yaml file, that you can source when generating the chart. For demonstration purposes, configure new command line arguments to enable Traefik's access log in JSON format, keeping the User-Agent and Referer headers:
additionalArguments:
- --accesslog=true
- --accesslog.format=json
- --accesslog.fields.headers.names.User-Agent=keep
- --accesslog.fields.headers.names.Referer=keep
§Inflation
Just like kubectl kustomize, you can ask Helm to output a plain configuration file corresponding to the entire Helm chart (we refer to this process as "inflating" a Helm chart):
$ helm template [NAME] [CHART] [flags]
For our Traefik deployment, you can use this command to generate default/traefik/traefik.yaml configured with the updated values:
$ helm template traefik traefik/traefik \
--namespace default \
--include-crds \
--skip-tests \
-f default/traefik/values.yaml \
> default/traefik/traefik.yaml
You can grep the output to check that the new arguments made it into the main Deployment:
$ grep accesslog default/traefik/traefik.yaml
- "--accesslog=true"
- "--accesslog.format=json"
- "--accesslog.fields.headers.names.User-Agent=keep"
- "--accesslog.fields.headers.names.Referer=keep"
There are a few extra arguments that need an explanation:
- --namespace default ensures the proper namespace is passed to the templates, as the chart may use it in application-specific places beyond the control of Kustomize.
- --include-crds includes the Custom Resource Definitions, which are Kubernetes' resource extension mechanism.
- --skip-tests prevents the creation of test containers that are used by Helm to validate the deployment of a release.
- -f default/traefik/values.yaml adds our values to the rendering context.
Helm provides additional features, such as hooks that can run when deploying a release. Obviously, this isn't supported outside of Helm, so if you have to use a chart that relies on these features, you may prefer to deploy it through Helm.
Otherwise, upgrading Helm charts in a Kustomize configuration is as simple as running helm repo update, then helm template with the same options described previously, inspecting the diff, and applying the changes.
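A typical upgrade session could look like this, reusing the options from the previous sections (a sketch, adjust the prune options to your configuration):
$ helm repo update
$ helm template traefik traefik/traefik \
    --namespace default \
    --include-crds \
    --skip-tests \
    -f default/traefik/values.yaml \
    > default/traefik/traefik.yaml
$ git diff default/traefik/traefik.yaml
$ kubectl diff -k . --prune --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
$ kubectl apply -k . --prune --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize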
If you skim through Kustomize's documentation, you will find that it supports inflating Helm charts behind the flag --enable-helm:
helmCharts:
  - name: minecraft
    includeCRDs: false
    valuesInline:
      minecraftServer:
        eula: true
        difficulty: hard
        rcon:
          enabled: true
    releaseName: moria
    version: 3.1.3
    repo: https://itzg.github.io/minecraft-server-charts
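For reference, a kustomization.yaml containing a helmCharts entry is rendered by passing the corresponding flag to the standalone binary:
$ kustomize build --enable-helm .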
The documentation states that these options were designed for experimentation, not for production. Here are a few reasons:
- Such a configuration doesn't capture the state in a fully declarative way, because it depends on remote resources at generation time. The main risk is that you may not be able to deploy or roll back if this URL becomes inaccessible.
- When you review changes with Git after changing this configuration, you cannot inspect what changed in the Helm chart in terms of actual Kubernetes resources without comparing the output of kubectl kustomize before and after.
- The chart is pulled every time you generate the configuration. It is possible to commit it to the repository to avoid repulling every time, but these rules are implicit, and it becomes slightly more complicated to keep the local chart in sync with the specified version.
For all these reasons, whenever I want to upgrade these Helm charts, I prefer to inflate them explicitly and commit the result as a plain YAML configuration file, sourced like a regular Kustomize resource.
§Keeping secrets with KSOPS
Secrets are designed to hold sensitive data that you shouldn't embed in your Deployments. This section explains how to make them work with a fully declarative configuration while maintaining a reasonable level of security. We will have to ensure that updates to a Secret are propagated to all the Deployments that reference it (which isn't the default behavior), then we will see how to store Secrets encrypted in the repository, and finally, we will rely on a plugin to decrypt them during the apply operation, since this isn't natively supported by Kubectl.
§Creation
Create a Secret named password that contains the typical piece of information you don't want to serve publicly, like a password:
apiVersion: v1
kind: Secret
metadata:
  name: password
data:
  password: SG91c3RvbldlSGF2ZUFQcm9ibGVt
Secret data is Base64-encoded. You can use the following command to encode to Base64:
$ echo -n 'Hello, World!' | base64
SGVsbG8sIFdvcmxkIQ==
And decode from Base64 as follows:
$ echo -n 'SGVsbG8sIFdvcmxkIQ==' | base64 -d
Hello, World!
Source the Secret from the file server's kustomization.yaml:
commonLabels:
  dzx.fr/name: file-server
resources:
  - deployment.yaml
  - secret.yaml
Then, edit the Deployment to mount this Secret as a volume in the webserver's root directory (of course, this isn't something you would typically do; it is only for demonstration purposes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: file-server
spec:
  selector:
    matchLabels:
      app: file-server
  template:
    metadata:
      labels:
        app: file-server
    spec:
      containers:
        - name: file-server
          image: alpinelinux/darkhttpd
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: password
              mountPath: /var/www/localhost/htdocs
              readOnly: true
      volumes:
        - name: password
          secret:
            secretName: password
            optional: true
Apply these changes:
$ kubectl apply -k . --prune --prune-allowlist=core/v1/Secret --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
secret/password created
deployment.apps/file-server configured
You can now create a port-forward to the file server:
$ kubectl port-forward deployment/file-server 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
And cURL the content of the secret:
$ curl http://127.0.0.1:8080/password
HoustonWeHaveAProblem
§Fixing updates
Let's try to update the password:
apiVersion: v1
kind: Secret
metadata:
  name: password
data:
  password: TGV0TWVJbg==
Apply this change:
$ kubectl apply -k . --prune --prune-allowlist=core/v1/Secret --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
secret/password configured
deployment.apps/file-server unchanged
Check that the Secret was updated:
$ curl http://127.0.0.1:8080/password
HoustonWeHaveAProblem
And... It didn't work. So much for a declarative deployment. You can try to restart the Deployment:
$ kubectl rollout restart deployment/file-server
deployment.apps/file-server restarted
And now the Secret is up-to-date:
$ curl http://127.0.0.1:8080/password
LetMeIn
Kubernetes only restarts Deployments whose configuration changed. Because the Deployment itself didn't change, but only the content of the Secret it references, Kubernetes didn't restart this Deployment.
To force a restart after changing a Secret, you have to generate Secrets in a way that also requires a change to the Deployment. For example, you could add the hash of the Secret's content to its name, so the Deployment must be updated to reference the new Secret.
There are two ways to achieve that with Kustomize:
- Annotate the Secret with kustomize.config.k8s.io/needs-hash: "true" in default/file-server/secret.yaml:

  apiVersion: v1
  kind: Secret
  metadata:
    name: password
    annotations:
      kustomize.config.k8s.io/needs-hash: "true"
  data:
    password: SUJlbGlldmVJQ2FuRmx5

- Use a Secret generator in default/file-server/kustomization.yaml:

  commonLabels:
    dzx.fr/name: file-server
  resources:
    - deployment.yaml
  secretGenerator:
    - name: password
      literals:
        - password=IBelieveICanFly
In the output of kubectl kustomize, you can see that a new Secret is created based on the hash of its content:
apiVersion: v1
data:
  password: SUJlbGlldmVJQ2FuRmx5
kind: Secret
metadata:
  labels:
    dzx.fr/managed-by: kustomize
    dzx.fr/name: file-server
  name: password-dh2d97kgg6
  namespace: default
type: Opaque
And the Secret name in the Deployment is automatically updated as well:
containers:
  - image: alpinelinux/darkhttpd
    name: file-server
    ports:
      - containerPort: 8080
    volumeMounts:
      - mountPath: /var/www/localhost/htdocs
        name: password
        readOnly: true
volumes:
  - name: password
    secret:
      optional: true
      secretName: password-dh2d97kgg6
Whenever the content of the Secret changes, the hash will change, and Kubernetes will have to restart the Deployment.
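To check the behavior end to end, you can re-apply (the old password Secret gets pruned), wait for the rollout, and fetch the file again through a new port forward in a separate terminal (a sketch):
$ kubectl apply -k . --prune --prune-allowlist=core/v1/Secret --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
$ kubectl rollout status deployment/file-server
$ kubectl port-forward deployment/file-server 8080:8080
$ curl http://127.0.0.1:8080/password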
Any resource that references another by name, like using ConfigMaps from a Deployment, is subject to the same lazy update behavior.
ConfigMaps are used to pass non-sensitive bits of configuration to a Deployment, so Kustomize provides a configMapGenerator to make them declarative.
Unfortunately, nothing prevents a custom resource from referencing another one by name, and Kustomize won't be able to generate unique identifiers for them. That highlights one limitation of declarative deployments with Kubernetes.
§Encryption
Whether you use plain Secrets or generate them through Kustomize's Secret generator, the content is left unencrypted, which makes them unsuitable for pushing to a shared Git repository. To encrypt these files, we will use SOPS, a CLI tool to manage encrypted files, and Age, a simple alternative to GnuPG.
You can install these programs from your package manager:
$ pacman -S age sops
Creating a key pair with Age is easy:
$ age-keygen -o key.txt
Public key: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
SOPS looks for Age keys in ~/.config/sops/age/keys.txt, so move the previously generated key to this location:
$ mkdir -p ~/.config/sops/age/ && mv key.txt ~/.config/sops/age/keys.txt
Next, you have to indicate which public keys should be able to decrypt the Secrets you will create in this repository by listing them in a .sops.yaml configuration file at the root:
creation_rules:
  - unencrypted_regex: "^(apiVersion|metadata|kind|type)$"
    age: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
The unencrypted_regex attribute will make sure to keep the specified fields unencrypted, so only the sensitive data is encrypted.
To create a new encrypted Secret inside the repository, ensure the EDITOR environment variable is set and run:
$ sops default/file-server/secret.enc.yaml
It opens a temporary file that contains the cleartext. Replace the content with the password Secret:
apiVersion: v1
kind: Secret
metadata:
  name: password
  annotations:
    kustomize.config.k8s.io/needs-hash: "true"
data:
  password: TGV0TWVJbg==
Then, save and close this file. You should try to reopen it with SOPS to ensure you can decrypt it properly. If you inspect its encrypted content, you will see that it includes a ciphertext for each public key specified in creation_rules.
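You can also decrypt it non-interactively to standard output, which is handy for a quick check:
$ sops --decrypt default/file-server/secret.enc.yaml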
The next step is to allow Kubectl to decrypt this file during the apply operation. Kubectl doesn't provide a way to manage encrypted Secrets, but it supports plugins such as KSOPS, which builds on SOPS to decrypt Secrets.
The plugin ecosystem is currently experimental, so the following instructions depend on which version of Kustomize you use, of which there are two:
- The integration into Kubectl, which limits access to some features and requires the use of legacy plugins.
- The standalone version, which provides a newer "plugin" mechanism called KRM functions.
At some point, the integration should catch up with the standalone version. The following two sections contain the instructions for each version; choose whichever applies to you.
§Legacy exec plugin (kubectl kustomize)
Install KSOPS using the legacy installation script, which puts the binary in ~/.config/kustomize/plugin/viaduct.ai/v1/ksops/:
$ export XDG_CONFIG_HOME="$HOME/.config"
$ curl -s https://raw.githubusercontent.com/viaduct-ai/kustomize-sops/master/scripts/install-legacy-ksops-archive.sh | bash
Verify ksops plugin directory exists and is empty
Downloading latest release to ksops plugin path
Successfully installed ksops
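You can verify that the binary landed in the expected plugin directory:
$ ls ~/.config/kustomize/plugin/viaduct.ai/v1/ksops/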
To hook KSOPS into Kustomize, you have to create a generator resource that references the encrypted Secret file:
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: password-secret-generator
files:
  - secret.enc.yaml
Add this generator to the file server Kustomization:
commonLabels:
  dzx.fr/name: file-server
resources:
  - deployment.yaml
generators:
  - secret-gen.yaml
The use of Kubectl plugins is marked as experimental, so it is behind the flag --enable-alpha-plugins. This flag cannot be used with the apply operation, so you will have to apply in two steps, first by generating the configuration with kubectl kustomize, and then piping the output into kubectl apply -f - (where -f - tells Kubectl to read the configuration from STDIN):
$ kubectl kustomize --enable-alpha-plugins . \
| kubectl apply -f - --prune --prune-allowlist=core/v1/Secret --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
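As in the earlier sections, you can review the result before applying by piping the rendered configuration into kubectl diff instead:
$ kubectl kustomize --enable-alpha-plugins . \
  | kubectl diff -f - --prune --prune-allowlist=core/v1/Secret --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize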
§Exec KRM function (standalone kustomize)
You can install KSOPS by downloading the binary from the releases and putting it in your $PATH (for example, in ~/.local/bin/).
To hook KSOPS into Kustomize, you have to create a generator resource:
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: password-secret-generator
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ksops
files:
  - ./secret.enc.yaml
Add this generator to the file server kustomization.yaml:
commonLabels:
  dzx.fr/name: file-server
resources:
  - deployment.yaml
generators:
  - secret-gen.yaml
The use of Kustomize plugins is marked as experimental, so it is behind the flag --enable-alpha-plugins. You also have to allow exec plugins with --enable-exec. These flags cannot be used with the apply operation, so you will have to apply in two steps, first by generating the configuration with kustomize build, and then piping the output into kubectl apply -f - (where -f - tells Kubectl to read the configuration from STDIN):
$ kustomize build --enable-alpha-plugins --enable-exec . \
| kubectl apply -f - --prune --prune-allowlist=core/v1/Secret --prune-allowlist=apps/v1/Deployment -l dzx.fr/managed-by=kustomize
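You can then check that the decrypted Secret reached the cluster, without printing its content:
$ kubectl get secrets -l dzx.fr/name=file-server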
§Conclusion
There are a few challenges to overcome to manage Kubernetes resources in a fully declarative way:
- The pruning mechanism built into Kubectl requires explicit configuration involving labels, selectors, and resource types, which isn't straightforward and is still experimental.
- Popular packages rely on third-party tools like Helm that provide other mechanisms to manage Kubernetes resources.
- Changes to resources like Secrets or ConfigMaps do not cause restarts of the Deployments that reference them.
- There are security considerations to versioning sensitive resources in a Git repository.
Kustomize and KSOPS provide a way to overcome these issues and take a first step on the way to GitOps, by allowing the management of Kubernetes resources in a declarative way from a Git repository. The next step is to integrate this configuration into a continuous deployment pipeline: you could go a long way with a few Drone CI pipelines, or you could use tools like ArgoCD or Flux that rely on a Kubernetes controller.
This article demonstrated how Kustomize helps bundle multiple resources, apply cross-cutting fields to nested resources, and use generators. But this is only the tip of the iceberg, as its true purpose is to patch resources to provide a template-free last-mile customization layer. A new standard based on KRM functions aims to create an ecosystem of transformations that generalizes the concepts of plugins and generators to make Kustomize even more versatile.