Kustomize - template-free Kubernetes application management

Zalán Tóth
5 min read · Jun 27, 2022


Managing applications on Kubernetes can be difficult as the number of components grows. There are many resources like deployments, HPAs, services, secrets, configs, etc., and all of them have to be installed in multiple environments.

Deploying everything manually into all environments - like prod, beta, dev… - is challenging, but luckily multiple solutions were born to solve this problem. Helm might be the most popular one, but its learning curve is steep as developers have to learn another tool and practice it.

On the other hand, Kustomize uses the original - well known - Kubernetes manifests in a template-free way. The learning curve is minimal, and kubectl - the de facto standard Kubernetes CLI - has shipped an up-to-date version of it since v1.21.

In this article we are gonna go through the basic functionality of Kustomize and define the manifests for two applications. They will be deployed into two different environments (local and production) with different characteristics and configurations.

Suppose we want to deploy the applications into our local environment. Besides the applications, we need to provide the whole infrastructure. In production, on the contrary, the infrastructure is preinstalled: for example, the DB team provisioned a database for the application, the network team set up the firewalls... None of these exist in the local environment.

We are gonna use two prebuilt containers, the api-gateway and the user-service. The following diagram depicts the differences between the two environments. In the local environment we deploy the gateway application - which contains deployment, HPA, cluster-IP and ingress resources - the user-service, and the database system as well (deployment, cluster-IP, persistent volume...). In production only the first two will be installed. The dashed rectangles bound the individual stacks and their resources.

Local and Production deployments

We are gonna split the deployment into parts. First we create three folders in the deployment directory.

  • base holds the original manifests for all applications
  • overlays contains the patches for the given environments
  • components includes the extensions like the database

All manifests are put together by the kustomization.yaml files.

Directory structure
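The resulting layout might look like the sketch below (the file names under base and components are illustrative assumptions; only the three top-level folders are fixed by the article):

```
deployment/
├── base/
│   ├── kustomization.yaml
│   ├── gateway/
│   │   ├── deployment.yaml
│   │   ├── hpa.yaml
│   │   ├── service.yaml
│   │   └── ingress.yaml
│   └── user-service/
│       ├── deployment.yaml
│       ├── hpa.yaml
│       └── service.yaml
├── components/
│   └── postgres/
│       ├── kustomization.yaml
│       ├── deployment.yaml
│       ├── service.yaml
│       └── persistent-volume.yaml
└── overlays/
    ├── local/
    │   └── kustomization.yaml
    └── prod/
        └── kustomization.yaml
```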

I won’t copy all manifests here, but you can find them and the whole project on this link.

Base

Let’s start with the base. After the Kubernetes manifests are done, we create a kustomization.yaml file in the base folder. We can do it manually or by running the following command in the base folder:

kustomize create --autodetect --recursive

base/kustomization.yaml

In the resources section we list all manifests we would like to include in our deployment. The images section describes the images in use. It’s important to use the same name as in the deployment.yaml file.
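A minimal base/kustomization.yaml could look like this sketch (the resource file names and the user-service image name are assumptions; only the gateway image name appears in the article):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - gateway/deployment.yaml
  - gateway/hpa.yaml
  - gateway/service.yaml
  - gateway/ingress.yaml
  - user-service/deployment.yaml
  - user-service/hpa.yaml
  - user-service/service.yaml

images:
  - name: zalerix/kustomize-article-gw
    newTag: latest
  - name: zalerix/kustomize-article-user-service # assumed image name
    newTag: latest
```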

Gateway deployment manifest

Kustomize uses newName and newTag to replace the image defined in the name property of deployment.yaml. We can update the tag or the name by running the following command in the base folder:

kustomize edit set image zalerix/kustomize-article-gw:latest=zalerix/kustomize-article-gw:1.1.0
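After the command runs, the images section of base/kustomization.yaml should be rewritten in place to something like the following (the exact entry depends on the kustomize version; the left side of the `=` becomes the matched name):

```yaml
images:
  - name: zalerix/kustomize-article-gw:latest
    newName: zalerix/kustomize-article-gw
    newTag: 1.1.0
```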

Components

Components are useful if we want to include resources only in some environments. In this project we need PostgreSQL in the local environment, so we define it as a component. Let’s create all the necessary manifests and add them to the kustomization.yaml. Note that the kind of this file is Component. Every component must have its own kustomization manifest.

components/postgres/kustomization.yaml
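A sketch of components/postgres/kustomization.yaml - note the Component kind and its alpha API version; the listed resource files are assumptions:

```yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
  - deployment.yaml
  - service.yaml
  - persistent-volume.yaml
  - persistent-volume-claim.yaml
```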

Overlays

In the overlays folder we can modify the manifests to describe our environment properly. Components, patches and other resources are included here.

We create a new folder under the overlays called local and a kustomization.yaml file in it.

overlays/local/kustomization.yaml

Let’s go through this file.

  • commonLabels adds the defined labels to every resource.
  • namePrefix updates all names in the base manifests by prepending its value to the names. nameSuffix does the same but at the end of the names.
  • resources includes new resources, selected resources or the whole base folder. Each included entry has to be either a kustomization directory or a resource manifest.
  • components binds the given components into the actual environment.
  • patchesStrategicMerge merges the patch files with the base manifests; we are gonna talk about it in detail later.
  • patchesJson6902 overrides a value on the given JSON path in a manifest.
    We have to define the target, which will be the user-service application’s deployment.yaml file. In the file we update the CPU limit field. We can select list elements - like containers - by index. The user-service container is the first element of the list, so the index is 0. The operation can be add, remove or replace.
base/user-service-deployement.yaml
  • The Secret and ConfigMap generators create those resources from property files. With the defined name we can bind them to the deployment.yaml. The names of the generated ConfigMap and Secret will be suffixed with a hash of the file content, and Kustomize overrides the references in the deployment.yaml manifest automatically.
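Putting these fields together, overlays/local/kustomization.yaml could look like this sketch (label values, the name prefix, patch paths and generator file names are illustrative assumptions):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

commonLabels:
  env: local

namePrefix: local-

resources:
  - ../../base

components:
  - ../../components/postgres

patchesStrategicMerge:
  - user-service/hpa-patch.yaml

patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: user-service
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/cpu
        value: 500m

configMapGenerator:
  - name: user-service-config
    envs:
      - user-service/config.properties

secretGenerator:
  - name: user-service-secret
    envs:
      - user-service/secret.properties
```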

We can use patch files to override values from the base; it’s cleaner than using JSON paths. The patches only have to contain the version, kind and name of the original manifest plus the overridden values. For example, we define the HPA in the base as you can see below.

base/user-service/hpa.yaml
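The base HPA might be defined like this (a sketch with assumed names and metrics; the article only fixes maxReplicas at 3):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```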

To override maxReplicas from 3 to 1 in the local environment, we define hpa-patch.yaml in the local folder with the following content:

overlays/local/user-service/hpa-patch.yaml
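Following the rule above - version, kind, name, plus the overridden field - the patch could look like this (resource name assumed):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service
spec:
  maxReplicas: 1
```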

Kustomize will find the resource with the given name and kind and update the value of maxReplicas from 3 to 1.

Next we define the patches for prod and also create its kustomization.yaml.

The environments are done. The following commands deploy the given environments into the cluster. In this tutorial I’m gonna install them into the same namespace so we can see the differences.

kubectl apply -k .\deployment\overlays\prod

kubectl apply -k .\deployment\overlays\local

And we can print the final generated manifests using the dry-run mechanism.

kubectl apply -k .\deployment\overlays\local --dry-run=client -o yaml

After all environments are deployed we should see the following running pods:

Local and Prod pods in the same namespace on the same cluster

The applications accept GET and POST requests at the following URLs:

http://localhost/local-gateway/user-service/user

http://localhost/prod-gateway/user-service/user

It will return an empty list until we post some users into the system. The POST body is the user object:

{
"name": "zlaval"
}

At the end, we can clean up using the delete command.

kubectl delete -k .\deployment\overlays\prod

All code is available on GitHub.

Finally, an example output of the generated and applied manifests for the local environment:

Result of dry-run
