Use Kustomize for Kubernetes Deployments

Updated 1 week ago by Michael Cretzman

Harness supports Kustomize kustomizations in your Kubernetes deployments. You can use overlays, multibases, plugins, sealed secrets, etc., just as you would in any native kustomization.

New to Kustomize? In a nutshell, kustomizations let you create specific Kubernetes deployments while leaving the original manifests untouched. You drop a kustomization.yaml file next to your Kubernetes YAML files and it defines new behavior to be performed during deployment.
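As a minimal sketch (file, label, and image names here are illustrative, not from your project), a kustomization.yaml might look like this:

```yaml
# kustomization.yaml -- sits next to your Kubernetes manifests.
# Lists the original manifests as resources and layers changes on top
# of them without editing the files themselves.
resources:
- deployment.yaml
- service.yaml

# Add a common label to every generated object.
commonLabels:
  app: my-app

# Override the image tag at build time, leaving deployment.yaml untouched.
images:
- name: pseudo/your-image
  newTag: "1.2.0"
```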
Please review the video Kustomize: Deploy Your App with Template Free YAML (30min) and the Kustomize glossary.

Before You Begin

Kustomize is supported in Harness Kubernetes v2 Services only. This is the default type, but some Harness users might be using a legacy Kubernetes v1 Service.

Visual Summary

The following diagram shows a very simple topology for implementing Kustomize.

The Harness Kubernetes Delegate runs in the target cluster with Kustomize pre-installed. The Delegate obtains kustomization.yaml and resource files from a Git repo. The Delegate deploys the Kubernetes objects declared using Kustomize in the target pods.

In this diagram we use Google Cloud Platform (GCP), but Harness can deploy to any Kubernetes cluster vendor.

Limitations

Currently, Harness support for Kustomize has the following limitations:

  • Harness variables and secrets are not supported.
  • Harness artifacts are not supported, as described in Review: Artifact Sources and Kustomization.
  • Harness does not use Kustomize for rollback. Harness renders the templates using Kustomize and then passes them to kubectl. A rollback works exactly as it does for native Kubernetes.

Review: Kustomize and Harness Delegates

All Harness Delegates include Kustomize by default. No installation is required.

Your Delegate hosts (typically pods in the target cluster) require outbound HTTPS/SSH connectivity to Harness and your Git repo.

The remainder of this topic assumes you have a running Harness Delegate and Cloud Provider connection. For details on setting those up, see Connect to Your Target Kubernetes Platform.

Step 1: Connect to Your Kustomize Repo

You add a connection to the repo containing your kustomize and resource files as a Harness Source Repo Provider.

For details on adding a Source Repo Provider, see Add Source Repo Providers.

Here is a quick summary:

  1. In Harness, click Setup, and then Connectors.
  2. Click Source Repo Providers, and then click Add Source Repo Provider.
  3. Provide the following settings and click Submit:

  • Display Name: You will use this name to select the repo in your Harness Service.
  • URL: Provide the Git repo URL.
  • Username/password: Enter your Git credentials.
  • Branch: Enter the name of the branch you want to use, such as master.

Now you have a connection to your kustomize and resource files. Next, you can identify these files as Remote Manifests in a Harness Service.

The following steps assume you have created a Harness Service for your Kubernetes deployment. For details, see Create the Harness Kubernetes Service.

Review: Artifact Sources and Kustomization

Typically, Harness Services are configured with an Artifact Source. This is the container image or other artifact that Harness will deploy. For Kustomize, you do not specify an Artifact Source in your Harness Service.

The artifact you want to deploy must be specified in a spec (for example, deployment.yaml). If the image is in a public Docker Hub repo, you can simply list its name:

- name: app
  image: pseudo/your-image:latest

If your image is hosted in a private Docker Hub repo, you need to specify imagePullSecrets in the spec field:

containers:
- name: app
  image: pseudo/your-image:latest
imagePullSecrets:
- name: dockerhub-credential

Step 2: Add Manifests and Kustomization

  1. In your Harness Service, in Manifests, click Link Remote Manifests.
  2. In Remote Manifests, in Manifest Format, click Kustomization Configuration.
  3. Enter the following settings and click Submit.

  • Source Repository: Select the Source Repo Provider connection to your repo.
  • Commit ID: Select Latest from Branch or Specific Commit ID. Do one of the following:
    • Branch: Enter the branch name, such as master.
    • Commit ID: Enter the Git commit ID.
  • Path to kustomization directory: This setting is discussed below.
  • Path to Kustomize plugin on delegate: Enter the path to the plugin installed on the Delegate. This setting and using plugins are discussed later in this topic.

Once you have set up Kustomization Configuration, you can use the Service in a Harness Workflow. There are no other Kustomize-specific settings to configure in Harness.

Path to Kustomization Directory

You can manually enter the path to your kustomization root: the directory that contains a kustomization.yaml file in your repo. You do not need to enter the file name.

If you are using overlays, enter the path to the overlay kustomization.yaml.
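For example, given a repo laid out like this (folder names are illustrative), you would enter the directory path only, never the kustomization.yaml file name:

```yaml
# Illustrative repo layout (comments only; folder names are examples):
#
# kustomize/
#   base/
#     kustomization.yaml        <- enter "kustomize/base"
#     deployment.yaml
#   overlays/
#     staging/
#       kustomization.yaml      <- enter "kustomize/overlays/staging"
```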

As explained below, you can use Harness variable expressions in Path to kustomization directory to dynamically select bases for overlays.

Skip Versioning for Service

By default, Harness versions ConfigMaps and Secrets deployed into Kubernetes clusters. In some cases, you might want to skip versioning.

Typically, to skip versioning in your deployments, you add the annotation to your manifests. See Deploy Manifests Separately using Apply Step.

In some cases, such as when using public manifests or Helm charts, you cannot add the annotation. Or you might have 100 manifests and you only want to skip versioning for 50 of them. Adding the annotation to 50 manifests is time-consuming.

Instead, enable the Skip Versioning for Service option in Remote Manifests.

When you enable Skip Versioning for Service, Harness will not perform versioning of ConfigMaps and Secrets for the Service.

If you have enabled Skip Versioning for Service for a few deployments and then disable it, Harness will start versioning ConfigMaps and Secrets.

Option 1: Overlays and Multibases using Variable Expressions

An overlay is a kustomization that depends on another kustomization, creating variants of a common base. In simple terms, an overlay changes pieces of the base kustomization, typically by using patches.

A multibase is a type of overlay in which each variant uses the base but makes additions, such as adding a namespace.yaml. In effect, you are declaring that the overlays aren't just changing pieces of the base, but are new bases.

In both overlays and multibases, the most common example is staging and production variants that use a common base but make changes/additions for their environments. A staging overlay could add a configMap and a production overlay could have a higher replica count and persistent disk.
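As a sketch, a staging overlay kustomization.yaml referencing a common base might look like this (folder names, prefixes, and the patch file are illustrative):

```yaml
# overlays/staging/kustomization.yaml (illustrative)
namePrefix: staging-       # prefix all object names for this variant
commonLabels:
  variant: staging
bases:
- ../../base               # the shared base kustomization
patchesStrategicMerge:
- config-map.yaml          # staging-only ConfigMap changes
```

A production overlay would reference the same base but patch in, for example, a higher replica count.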

To deploy a staging overlay outside of Harness, you would build the overlay's root and apply the result:

kustomize build $DEMO_HOME/overlays/staging | kubectl apply -f -

To deploy each overlay in Harness, you could create a Service for each overlay and configure the Path to kustomization directory setting in Remote Manifests to point to the overlay root.

A better method is to use a single Service for all bases and manually or dynamically identify which base to use at deployment runtime.

You can accomplish this using Harness variable expressions in Path to kustomization directory.

Environment Name Variables

Using Environment name variables is the simplest method of using one Service and selecting from multiple bases.

First, in your repo, create separate folders for each environment's kustomization.yaml. Here we have folders for dev, production, and staging:

The kustomization.yaml file in the root lists these folders as bases:

bases:
- dev
- staging
- production

We are only concerned with staging and production in this example.

Next, mirror the repo folder names in Harness Environment names. Here we have two Environments named production and staging for the corresponding repo folders named production and staging.

Next, use the built-in Harness variable expression ${env.name} in Path to kustomization directory to reference the Environment names. The ${env.name} expression resolves to the name of the Harness Environment used by a Workflow.

For example, if you have two Environments named production and staging, at deployment runtime the ${env.name} expression resolves to whichever Environment is used by the Workflow.

To use the ${env.name} expression in Path to kustomization directory, and reference the Environments and corresponding folders, you would enter kustomize/multibases/${env.name}.

Each time a Workflow runs, it replaces the ${env.name} expression with the name of the Environment selected for the Workflow.

For example, if the Workflow uses the Environment production, the Path to kustomization directory setting becomes kustomize/multibases/production. Harness then looks in the production folder in your repo for the kustomization.yaml file.

Once you have created a Workflow, you can templatize its Service setting so that you can select the Environment and its corresponding repo folder at deployment.

You can also select the Environment in a Trigger that executes the Workflow.

For more information, see Triggers and Passing Variables into Workflows and Pipelines from Triggers.

Service Variables

You can also use Service variables in Path to kustomization directory. This allows you to templatize the Path to kustomization directory setting and overwrite it at the Harness Environment level. Let's look at an example.

For example, you could set Path to kustomization directory to a Service variable expression such as ${serviceVariable.kustomizePath} (the variable name here is illustrative; you choose it when you create the variable).

If you have Service Config Variables set up, you will see the variable expressions displayed when you enter $. For details on Service variables, see Services.

Service variables can be overwritten at the Harness Environment level. This allows you to use a variable for the Path to kustomization directory setting and then override it for each Harness Environment you use with this Service.

For example, if you have two Environments, staging and production, you can supply different values in each Environment for Path to kustomization directory.

For details on overriding Service settings, see Override Harness Kubernetes Service Settings.

Workflow Variables

For Workflow variables, you need to create the variable in the Workflow and then enter the variable name manually in Path to kustomization directory, following the format ${workflow.variable.variable_name}.

For example, if you created a Workflow variable named kustomizePath, you would enter ${workflow.variable.kustomizePath} in Path to kustomization directory.

If you use Workflow variables for Path to kustomization directory, you can provide a value for Path to kustomization directory when you deploy the Workflow (standalone or as part of a Pipeline).

Typically, when you deploy a Workflow, you are prompted to select an artifact for deployment. If a Workflow is deploying a Service that uses a remote Kustomization Configuration, you are not prompted to provide an artifact for deployment.

See Workflows and Kubernetes Workflow Variable Expressions.

Option 2: Use Plugins in Deployments

Kustomize offers a plugin framework to generate and/or transform a Kubernetes resource as part of a kustomization.

You can add your plugins to the Harness Delegate(s) and then reference them in the Harness Service you are using for the kustomization.

When Harness deploys, it will apply the plugin you reference just as if you had run Kustomize with the --enable_alpha_plugins flag. See No Security from Kustomize.
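A kustomization invokes an exec plugin through a small plugin configuration file. As a sketch based on the SillyConfigMapGenerator example from the Kustomize plugin documentation (the apiVersion, kind, and argument values all come from that example):

```yaml
# silly-config-map-generator.yaml (file name is illustrative). Reference
# it from your kustomization.yaml under the "generators:" field:
#
#   generators:
#   - silly-config-map-generator.yaml
#
# The apiVersion and kind map to the plugin's directory and executable
# name on the Delegate, per the Kustomize placement convention.
apiVersion: someteam.example.com/v1
kind: SillyConfigMapGenerator
metadata:
  name: whatever
argsOneLiner: Bonjour true   # passed to the exec plugin as $1 and $2
```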

Add Plugins to Delegate using a Delegate Profile

To add a plugin to a Delegate, you create a Delegate Profile and apply it to the Delegates.

  1. In Harness, click Setup, and click Harness Delegates.
  2. Click Manage Delegate Profiles, and then click Add Delegate Profile. The Delegate Profile settings appear.
  3. Enter a name and the script for the plugin and click Submit.

For example, here is a ConfigMap generator plugin script, based on the SillyConfigMapGenerator example from the Kustomize plugin documentation. The MY_PLUGIN_DIR definition is an assumption that follows the Kustomize plugin placement convention described below:

# Plugin directory per the Kustomize placement convention (assumed value).
MY_PLUGIN_DIR=$HOME/.config/kustomize/plugin/someteam.example.com/v1/sillyconfigmapgenerator

mkdir -p $MY_PLUGIN_DIR
cat <<'EOF' >$MY_PLUGIN_DIR/SillyConfigMapGenerator
#!/bin/bash
# Skip the config file name argument.
shift
today=`date +%F`
echo "
kind: ConfigMap
apiVersion: v1
metadata:
  name: the-map
data:
  today: $today
  altGreeting: "$1"
  enableRisky: "$2"
"
EOF
cat $MY_PLUGIN_DIR/SillyConfigMapGenerator
chmod +x $MY_PLUGIN_DIR/SillyConfigMapGenerator
readlink -f $MY_PLUGIN_DIR/SillyConfigMapGenerator

Each plugin is added to its own directory, following this convention:

$XDG_CONFIG_HOME/kustomize/plugin/${apiVersion}/LOWERCASE(${kind})

The default value of XDG_CONFIG_HOME is $HOME/.config. See Placement from Kustomize.

In the script example above, you can see that the plugin is added to its own folder following this convention.
Note the location of the plugin because you will use that location in the Harness Service to indicate where the plugin is located (described below).

Plugins can only be applied to Harness Kubernetes Delegates.

Next, apply the Profile to Kubernetes Delegate(s):

  1. Click the Profile menu in the Delegates list and choose your Profile.
  2. Click Confirm.

Wait a few minutes for the Profile to install the plugin. Then click View Logs to see the output of the Profile.

Select Plugin in Service

Once the plugin is added to the Delegate(s), you can reference it in the Remote Manifests Path to Kustomize plugin on Delegate setting in the Harness Service. You will indicate the same location where your Delegate Profile script installed the plugin:

Click Submit. Harness is now configured to use the plugin when it deploys using Kustomize.

Example 1: Multibase Rolling Deployment

For this example, we will deploy the multibases example for Kustomize in a Rolling Update strategy. You can set up a Harness Source Repo Provider to connect to that repo.

We will use Harness Environment names that match the base folder names in the repo.

In the Harness Service, we will use the ${env.name} expression in the Path to kustomization directory setting.

When we deploy, the Workflow will use the name of the Environment in Path to kustomization directory and the corresponding repo folder's kustomization.yaml will be used.

Here is what the repo looks like:

Here are the Harness Environments whose names correspond to the dev, stage, and production repo folders:

Here are the Harness Service Remote Manifests settings. The Path to kustomization directory setting uses the ${env.name} expression, which will be replaced with a Harness Environment name at deployment runtime.

Next we'll create a Workflow using the Rolling Deployment strategy. Here we select the Service we set up.

When you first create the Workflow, you cannot set the Environment setting as a variable expression. Create the Workflow using any of the Environments, and then edit the Workflow settings and turn the Environment and Infrastructure Definition settings into variable expressions by clicking their [T] icons.

When you are done, the Workflow settings will look like this:

There is nothing to set up in the Workflow. Harness automatically adds the Rollout Deployment step that performs the Kubernetes Rolling Update.

In the Workflow, click Deploy. In Start New Deployment, select the name of the Environment that corresponds to the repo folder containing the base you want to use:

In this example, we select the stage Environment. Once deployment is complete, you can see the stage repo folder's base used and the staging-myapp-pod created.

Review: What Workloads Can I Deploy?

Harness Canary and Blue/Green Workflow types only support Kubernetes Deployment workloads. Rolling Workflow types support all other workloads, except Jobs. The Apply Step can deploy any workloads or objects.

In Harness, a workload is a Deployment, StatefulSet, or DaemonSet object deployed and managed to steady state.

Change the Default Path for the Kustomize Binary

The Harness Delegate ships with the 3.5.4 release of Kustomize.

If you want to use a different release of Kustomize, add it to a location on the Delegate, update the following Delegate files, and restart the Delegate.

See Manage Harness Delegates for details on each Delegate type.

Shell Script Delegate

Add kustomizePath: <path> to config-delegate.yml.
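For example, assuming Kustomize was installed at an illustrative path on the Delegate host:

```yaml
# config-delegate.yml (excerpt); the path below is illustrative
kustomizePath: /opt/harness-delegate/tools/kustomize
```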

Kubernetes Delegate

Update the value field of the KUSTOMIZE_PATH environment variable in harness-delegate.yaml:

- name: KUSTOMIZE_PATH
  value: "<path>"

Helm Delegate

Add kustomizePath: "<path>" to harness-delegate-values.yaml.

kustomizePath: "<path>"

Docker Delegate

Set the Kustomize path in the Delegate's docker run command:

-e KUSTOMIZE_PATH=<path> \

ECS Delegate

Update the following environment entry in ecs-task-spec.json:

"name": "KUSTOMIZE_PATH",
"value": "<path>"
