Kubernetes Deployments Version 2

Updated 4 days ago by Michael Cretzman

This guide covers new Harness Kubernetes Deployment Version 2 features. For Version 1 Kubernetes and Helm deployment features, see Helm Deployments.

This guide will walk you through deploying a Docker image to a Kubernetes cluster using the Harness Kubernetes Deployment Version 2 features. This deployment scenario is very popular and a walkthrough of all the steps involved will help you set up this scenario in Harness for your own microservices and apps.

To see a summary of the changes in Harness Kubernetes Deployment Version 2, see Summary of Changes in Kubernetes Deployments Version 2.

Jump to Setup and Deployments

If you already have your Harness Delegate, Artifact Server, and Cloud Provider set up, skip ahead to Harness Application Setup to go straight to the Application setup and deployment steps in this guide.

Deployment Summary

For a general overview of how Harness works, see Harness Architecture and Application Checklist.

The following list describes the major steps we will cover in this guide:

  1. Install the Harness Kubernetes Delegate in your Kubernetes cluster.
  2. Add Artifact Server. In this guide, we will use a publicly-available Docker image of NGINX.
  3. Add Cloud Provider. This is a connection to your Kubernetes cluster, either standalone or hosted in a cloud platform like GCP.
  4. Create the Harness Application for your Kubernetes CD pipeline. The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your microservice using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD.
  5. Create the Harness Service using the Kubernetes type.
    1. Add your Kubernetes manifests and any config variables and files.
  6. Create the Harness Environment containing the Service Infrastructure definition of your Kubernetes cluster, and any overrides.
  7. Create the Canary, Blue/Green, or Rollout deployment Harness Workflow.
  8. Deploy the Workflow.
  9. Advanced options not covered in this guide:
    1. Create a Harness Pipeline for your deployment, including Workflows and Approval steps. For more information, see Pipelines.
    2. Create a Harness Trigger to automatically deploy your Workflows or Pipeline according to your criteria. For more information, see Triggers.
    3. Create Harness Infrastructure Provisioners for your deployment environments. For more information, see Infrastructure Provisioners.

What Are We Going to Do?

This guide walks you through deploying a publicly available Docker image of NGINX to a Kubernetes cluster using Harness. Basically, the Harness deployment does the following:

  1. Docker - Pull a Docker image of NGINX from Docker Hub.
  2. Kubernetes - Deploy to a Kubernetes cluster. We will cover Canary, Blue/Green, and Rollout deployments.

It's that simple, at a high level. We will walk through the steps to set up Harness for these deployments in detail.

What Are We Not Going to Do?

This is a simple guide that covers the basics of deploying artifacts to Kubernetes. It does not cover the following:

  • Teach you Kubernetes - This guide assumes you know Kubernetes and have deployed artifacts to a cluster. For a quick review of the Kubernetes concepts that you need for Harness Kubernetes deployments, see Deploy an application from Kubernetes. It's an excellent summary of key concepts.
  • Harness Delegate Installation - We will cover the recommended Delegate setup, but not every method. You can install the Delegate in a pod of your Kubernetes cluster or on any machine that has network access to your Artifact Source (such as Docker Hub) and Cloud Provider. For information, see Delegate Installation and Management, Delegate Server Requirements, and Delegate Connection Requirements.

What Harness Needs Before You Begin

To deploy to Kubernetes, Harness requires a Harness Kubernetes Delegate running in your target cluster, a connection to an Artifact Server, and a connection to a Cloud Provider.

We will walk you through the process of setting up Harness with connections to the artifact server and cloud provider, and also summarize the Harness Kubernetes Delegate installation.

Harness Kubernetes Delegate

The Harness Kubernetes Delegate runs in your target deployment cluster and executes all deployment steps, such as artifact collection and kubectl commands. The Delegate makes outbound connections to the Harness Manager only.

You can install and run the Harness Kubernetes Delegate in any Kubernetes environment, but the permissions needed for connecting Harness to that environment will be different for each environment.

The simplest method is to install the Harness Delegate in your Kubernetes cluster and then set up the Harness Cloud Provider to use the same credentials as the Delegate.

For information on how to install the Delegate in a Kubernetes cluster, see Kubernetes Cluster. For an example installation of the Delegate in a Kubernetes cluster in a Cloud Platform, see Installation Example: Google Cloud Platform.

Here is a quick summary of the steps for installing the Harness Delegate in your Kubernetes cluster:

  1. In Harness, click Setup, and then click Harness Delegates.
  2. Click Download Delegate and then click Kubernetes YAML.
  3. In the Delegate Setup dialog, enter a name for the Delegate, such as doc-example, and click SUBMIT. The YAML file is downloaded to your machine.
  4. Install the Delegate in your cluster. You can copy the YAML file to your cluster any way you choose, but the following steps describe a common method.
    1. In a Terminal, connect to the Kubernetes cluster, and then use the same terminal to navigate to the folder where you downloaded the Harness Delegate YAML file. For example, cd ~/Downloads.
    2. Extract the YAML file: tar -zxvf harness-delegate-kubernetes.tar.gz.
    3. Navigate to the harness-delegate-kubernetes folder that was created:

      cd harness-delegate-kubernetes
    4. Paste the following installation command into the Terminal and press enter:

      kubectl apply -f harness-delegate.yaml

      You will see the following output (this Delegate is named doc-example):

      namespace/harness-delegate created

      clusterrolebinding.rbac.authorization.k8s.io/harness-delegate-cluster-admin created

      statefulset.apps/doc-example-lnfzrf created
    5. Run this command to verify that the Delegate pod was created:

      kubectl get pods -n harness-delegate

      You will see output with the status Pending. The Pending status simply means that the cluster is still loading the pod.
    6. Wait a few moments for the cluster to finish loading the pod and for the Delegate to connect to Harness Manager.
      In Harness Manager, on the Harness Delegates page, the new Delegate will appear. You can refresh the page if you like.

Connectors and Providers Setup

In this section, we will add a Harness Artifact Server and Cloud Provider to your account.

Permissions

You connect Docker registries and Kubernetes clusters with Harness using the accounts you have with those providers. The following list covers the permissions required for the Docker and Kubernetes components.

  • Docker:
    • Read permissions for the Docker repository - The Docker registry you use as an Artifact Server in Harness must have Read permissions for the Docker repository.
    • List images and tags, and pull images - The user account you use to connect the Docker registry must be able to perform the following operations in the registry: List images and tags, and pull images. If you have a Docker Hub account, you can access the NGINX Docker image we use in this guide.
  • Kubernetes Cluster:
    • Please see Add Cloud Providers to see the permissions required for your Kubernetes cluster or Cloud Platform. For a cluster or provider such as OpenShift, please see Kubernetes Cluster.

For a list of all of the permissions and network requirements for connecting Harness to providers, see Delegate Connection Requirements.

Add the Artifact Server

For this guide, we will use a publicly-available Docker image of NGINX. Harness supports all of the common artifact servers. You can learn about the different Artifact Servers in Add Artifact Servers.

Docker Artifact Server

You can add a Docker repository, such as Docker Hub, as an Artifact Server in Harness. Then, when you create a Harness service, you specify the Artifact Server and artifact(s) to use for deployment.

For this guide, we will be using a publicly available Docker image of NGINX, hosted on Docker Hub at hub.docker.com/_/nginx/. You will need to set up or use an existing Docker Hub account to use Docker Hub as a Harness Artifact Server. To set up a free account with Docker Hub, see Docker Hub.

To specify a Docker repository as an Artifact Server, do the following:

  1. In Harness, click Setup.
  2. Click Connectors. The Connectors page appears.
  3. Click Artifact Servers, and then click Add Artifact Server. The Artifact Servers dialog appears.
  4. In Type, click Docker Registry. The dialog changes for the Docker Registry account.
  5. In Docker Registry URL, enter the URL for the Docker Registry (for Docker Hub, https://registry.hub.docker.com/v2/).
  6. Enter a username and password for the provider (for example, your Docker Hub account).
  7. Click SUBMIT. The artifact server is displayed.

Docker Registry Across Multiple Projects

In this document, we perform a simple set up using Docker Registry. Another common artifact server for Kubernetes deployments is GCR (Google Container Registry), also supported by Harness.

An important note about using GCR is that if your GCR and target GKE Kubernetes cluster are in different GCP projects, Kubernetes might not have permission to pull images from the GCR project. For information on using a single GCR Docker registry across projects, see Using single Docker repository with multiple GKE projects from Medium and the Granting users and other projects access to a registry section from Configuring access control by Google.

Add the Cloud Provider

For a Cloud Provider in Harness, you can specify a Kubernetes cluster or a Kubernetes-supporting Cloud platform, such as Google Cloud Platform (GCP) or OpenShift, and then define the deployment environment for Harness to use.

For this guide, we will use a simple connection to the Kubernetes cluster that uses the same credentials as the Harness Delegate installed in the same cluster.

If you do not have a Kubernetes cluster, the default configuration for a Kubernetes cluster in GCP will provide you with what you need for this guide. For information on setting up a Kubernetes cluster on GCP, see Creating a Cluster from Google.

The specs for the Kubernetes cluster you create will depend on the microservices or apps you will deploy to it. To give you guidance on the specs for the Kubernetes cluster machines, here is the node pool created for a Kubernetes cluster in GCP:

For Harness deployments, a Kubernetes cluster requires the following:

  • Credentials for the Kubernetes cluster in order to add it as a Cloud Provider. The simplest method is to use the same credentials as the Harness Delegate installed in the cluster. If you set up GCP as a cloud provider using a GCP user account, that account should also be able to configure the Kubernetes cluster on the cloud provider.
  • The kubectl command-line tool must be configured to communicate with your cluster.
  • A kubeconfig file for the cluster. The kubeconfig file configures access to a cluster. It does not need to be named kubeconfig.

For more information, see Accessing Clusters and Configure Access to Multiple Clusters from Kubernetes.
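
For reference, a kubeconfig is a standard Kubernetes Config object. Here is a minimal sketch, with placeholder cluster, user, and server values (your cloud provider or cluster administrator supplies the real ones):

apiVersion: v1
kind: Config
current-context: example-context
clusters:
- name: example-cluster
  cluster:
    server: https://<cluster-endpoint>            # placeholder API server address
    certificate-authority-data: <base64-ca-cert>  # placeholder CA certificate
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
users:
- name: example-user
  user:
    token: <service-account-token>                # placeholder credential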

Kubernetes Cloud Provider

To set up a Kubernetes Cloud platform or cluster as a Harness Cloud Provider, do the following:

  1. In Harness, click Setup.
  2. Click Cloud Providers.
  3. Click Add Cloud Provider. The Cloud Provider dialog opens.
    In this example, we will add a Kubernetes Cluster Cloud Provider, but there are several other provider options. In some cases, you will need to provide access keys in order for the delegate to connect to the provider.
  4. In Type, select Kubernetes Cluster.
  5. In Display Name, enter a name for the Cloud Provider.
  6. Click the option Inherit Cluster Details from selected Delegate to use the credentials of the Delegate you installed in your cluster.
  7. In Delegate Name, select the name of the Delegate installed in your cluster. When you are done, the dialog will look something like this:
  8. Click SUBMIT. The Kubernetes Cluster Cloud Provider is added.

Harness Application Setup

The following procedure creates a Harness Application for a Kubernetes deployment.

An Application in Harness represents a logical group of one or more entities, including Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD. For more information, see Application Checklist.

To create the Harness Application, do the following:

  1. In Harness, click Setup.
  2. Click Add Application. The Application dialog appears.
  3. Give your Application a name that describes your microservice or app. For the purposes of this guide, we use the name ExampleApp.
  4. Click SUBMIT. The new Application is added.
  5. Click the Application name to open the Application. The Application entities are displayed.

Kubernetes Services

There are different types of Harness Services for different deployment platforms. The Kubernetes type includes Kubernetes-specific settings.

A Harness Service is different from a Kubernetes service. A Harness Service defines your microservice or application for deployment. A Kubernetes service enables applications running in a Kubernetes cluster to find and communicate with each other, and the outside world. To avoid confusion, a Harness Service is always capitalized in Harness documentation. A Kubernetes service is not.

To add the Kubernetes Service, do the following:

  1. In your new Application, click Services. The Services page appears.
  2. In the Services page, click Add Service. The Service dialog appears.
  3. In Name, enter a name for your Service, such as NGINX K8s.
  4. In Description, enter a description for your service.
  5. In Deployment Type, select Kubernetes.
  6. Click Enable Kubernetes V2 to enable it.
  7. Click SUBMIT. The new Service is displayed.

Next, we will walk through how to set up the Kubernetes manifests and use the Service features.

Add Artifact Sources

An Artifact Source in a Service is the microservice or application artifact you want to deploy. The Artifact Source uses an Artifact Server you set up for your Harness account, as described in Add the Artifact Server.

In this guide, we use a publicly-available Docker image of NGINX as our Artifact Source for deployment.

To add an Artifact Source to this Service, do the following:

  1. In the new Service, click Add Artifact Source, and select Docker Registry. There are a number of artifact sources you can use to add a Docker image. For more information, see Add a Docker Image Server. The Docker Registry dialog appears.
  2. In Name, let Harness generate a name for the source.
  3. In Source Server, select the Artifact Server you added earlier in this guide. We are using an Artifact Server connected to Docker Hub in this guide.
  4. In Docker Image Name, enter the image name. Official images in public repos such as Docker Hub need the library/ prefix, for example, library/nginx. For this guide, we will use Docker Hub and the publicly available NGINX image at library/nginx.
  5. Click SUBMIT. The Artifact Source is added.

You can add multiple Artifact Sources to a Service and view the build history for each one by clicking Artifact History.

Now that we have our Docker image artifact, we can add the Kubernetes manifests for our Service.

Add Manifests

The Manifests section of the Service contains the configuration files that describe the desired state of your application in terms of Kubernetes API object descriptions.

All files in Manifests must have the .yaml file extension.

What Can I Add in Manifests?

You can add any Kubernetes configurations files, formatted in YAML, such as object descriptions, in one or more files. For example, the Manifest section has the following default files:

  • values.yaml - This file contains the data for templated files in Manifests, using the Go text template package. This is described in greater detail below.
The only mandatory file and folder requirement in Manifests is that values.yaml is located at the directory root. The values.yaml file is required if you want to use Go templating. It must be named values.yaml and it must be in the directory root.
  • spec.yaml - This manifest contains two API object descriptions, ConfigMap and Deployment. These are standard descriptions that use variables in the values.yaml file. In the Canary and Rolling Update deployments in this guide, we simply deploy the NGINX artifact using these descriptions as is.
Manifest files added in Manifests are freeform. You can add your API object descriptions in any order and Harness will deploy them in the correct order at runtime.

You cannot add binary files or any non-cleartext content to the manifest files. For encrypted values, Harness stores them in its (or your) KMS vault and then sends them to Kubernetes via the Kubernetes API, as you will see later in this document.

Go Templating and Harness Variables in Configuration Files

You can use Go templating and Harness built-in variables in combination in your Manifests files.

First, let's look at the values.yaml file to see the variables we will use in our configuration files, with comments showing how they can be referenced:

# This will be used as {{.Values.name}}
name: harness-example

# This will be used as {{int .Values.replicas}}
replicas: 1

# This will be used as {{.Values.image}}
image: ${artifact.metadata.image}

The variable ${artifact.metadata.image} is a Harness variable for referencing the metadata of the Artifact Source. For more information about Harness variables, see Variables and Expressions in Harness.

Now, let's look at the default object descriptions to understand how easy it is to use Kubernetes in Harness.

apiVersion: v1
kind: ConfigMap # store non-confidential data in key-value pairs
metadata:
  name: {{.Values.name}}-config # name is taken from values.yaml
data:
  key: value # example key-value pair
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment # describe the desired state of the cluster
metadata:
  name: {{.Values.name}}-deployment # name is taken from values.yaml
spec:
  replicas: {{int .Values.replicas}} # tells deployment to run pods matching the template
  selector:
    matchLabels:
      app: {{.Values.name}} # name is taken from values.yaml
  template:
    metadata:
      labels:
        app: {{.Values.name}} # name is taken from values.yaml
    spec:
      containers:
      - name: {{.Values.name}} # name is taken from values.yaml
        image: {{.Values.image}} # image is taken from values.yaml
        envFrom:
        - configMapRef:
            name: {{.Values.name}}-config # name is taken from values.yaml
        ports:
        - containerPort: 80

Let's look at some more examples:

Example 1: Using Docker Image from Harness Artifact Stream

The Docker image name is available in the Harness variable: ${artifact.metadata.image}.

In the values.yaml file, it will look like this:

image: ${artifact.metadata.image}

In a manifest file, it will be used like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: {{.Values.image}} # applying ${artifact.metadata.image}
        ports:
        - containerPort: 80

Example 2: Creating ImagePullSecret using Harness Artifact Stream

The Docker config for the Docker repository (the same content shown by kubectl get secret regcred --output=yaml) is available in the Harness variable ${artifact.source.dockerconfig}. For more information, see Inspecting the Secret regcred from Kubernetes.

In the values.yaml file, it will look like this:

image: ${artifact.metadata.image}
dockercfg: ${artifact.source.dockerconfig}

In a manifest file, it will be used like this:

apiVersion: v1
kind: Secret
metadata:
  name: docker-regcred
  annotations:
    harness.io/skip-versioning: "true"
data:
  .dockercfg: {{.Values.dockercfg}} # applying ${artifact.source.dockerconfig}
type: kubernetes.io/dockercfg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: {{.Values.image}}
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: docker-regcred

If you are using anonymous access to a Docker registry (a public Docker registry, Nexus, or Artifactory), then imagePullSecrets should be removed from the pod specification. This is standard Kubernetes behavior and not related to Harness specifically.

Example 3: Using Harness Variables in Manifests

Harness built-in variables can be used in values.yaml file, and are evaluated at runtime. For a list of Harness variables, see Variables and Expressions in Harness.

In the values.yaml file, it will look like this:

name: ${serviceVariable.serviceName}

In a manifest file, it will be used like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.name}} # ${serviceVariable.serviceName}
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Example 4: Creating Namespace based on Environment InfraMapping

The Harness variable ${infra.kubernetes.namespace} refers to the namespace entered in the Namespace field of the Harness Environment Service Infrastructure settings.

You can use ${infra.kubernetes.namespace} in your Harness Service Manifests definition of a Kubernetes Namespace to reference the name you entered in the Service Infrastructure Namespace field. When the Harness Service is deployed to that Service Infrastructure, it will create a Kubernetes namespace using the value you entered in the Service Infrastructure Namespace field.

In the values.yaml file, it will look like this:

namespace: ${infra.kubernetes.namespace}

In a manifest file for the Kubernetes Namespace object, it will be used like this:

apiVersion: v1
kind: Namespace
metadata:
  name: {{.Values.namespace}}

When this manifest is used by Harness to deploy a Kubernetes Namespace object, it will replace ${infra.kubernetes.namespace} with the value entered in the Service Infrastructure Namespace field, creating a Kubernetes Namespace object using that name. Next, Harness deploys the other Kubernetes objects to that namespace.
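
For example, assuming you entered example in the Service Infrastructure Namespace field, the manifest above would render and apply as:

apiVersion: v1
kind: Namespace
metadata:
  name: example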

Can I Change the Default Folder Structure?

Yes. There is no significance to the default folder structure. You can use the folders and files just as you would in any Kubernetes folder or file structure.

The only mandatory file and folder requirement is that values.yaml is located at the directory root. The values.yaml file is required if you want to use Go templating.

Manifest Versioning

API objects described in the Manifests section are versioned by Harness. For example, let's say you have four object descriptions in a manifest file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
...
apiVersion: v1
kind: Secret
metadata:
  name: image-pull-secret
  annotations:
    harness.io/skip-versioning: "true"
...
apiVersion: v1
kind: Namespace
metadata:
  name: {{.Values.namespace}}
...
apiVersion: v1
kind: Secret
metadata:
  name: sample
stringData:
  password: {{.Values.password}}
type: Opaque
...

Each time these objects are deployed, they are versioned by Harness, as you will see in Harness Deployments:

INFO   2019-02-15 10:53:33    Kind                Name                                    Versioned
INFO   2019-02-15 10:53:33    Namespace           default                                 false
INFO   2019-02-15 10:53:33    Secret              image-pull-secret                       false
INFO   2019-02-15 10:53:33    Secret              sample                                  true
INFO   2019-02-15 10:53:33    Deployment          nginx-deployment                        false
INFO   2019-02-15 10:53:33
INFO   2019-02-15 10:53:33
INFO   2019-02-15 10:53:33    Current release number is: 5
INFO   2019-02-15 10:53:33
INFO   2019-02-15 10:53:33    Previous Successful Release is 4
INFO   2019-02-15 10:53:33
INFO   2019-02-15 10:53:33    Cleaning up older and failed releases
INFO   2019-02-15 10:53:33
INFO   2019-02-15 10:53:33    Managed Workload is: Deployment/nginx-deployment
INFO   2019-02-15 10:53:33
INFO   2019-02-15 10:53:33    Versioning resources.

Use Local or Remote Configuration Files

You can use local or remote configuration files for the Service. Local and remote configuration files have the following features and limitations:

  • By default, configuration files are stored locally in the Harness Manager.
  • You can store local configuration files in a remote repo by syncing your Harness Application with a repo, as described in Configuration as Code.
  • If you use remote configuration files for Manifests and sync your Application with a repo, the remote manifest files are not synced to the Application's repo. The remote configuration files for Manifests are simply pulled at runtime.

Local Configuration Files

By default, configuration files are stored locally in the Harness Manager. You can add or edit files as needed.

Editing Files

To edit a file, do the following:

  1. Click the Edit button.
  2. Enter your YAML, and then click Save.
Harness validates the YAML in the editor at runtime.

Manage Files and Folders

To add, rename, or delete a file or folder in the Manifests section, do the following:

  1. Click the vertical ellipsis character next to a folder or file. The file options appear.
  2. Click Add File, Rename File, or Delete File. In this case, let's add a file. Click Add File. The Add File dialog appears.
  3. Enter the file name for the new file. To create a folder along with the file, include the new folder in the path, such as legacy/frontend.yaml.
  4. Click SUBMIT. The new file and folder are created.

Harness Variables in Values File

You can use Harness built-in variables in the values.yaml file by entering $. The available variables are displayed:

Click the variable name and it will be added to the values.yaml file with the required syntax.

Remote Configuration Files

You can use your Git repo for the configuration files in Manifests and Harness will use them at runtime. You have two options for remote files:

  • Standard Kubernetes Resources in YAML - These files are simply the YAML manifest files stored on a remote Git repo.
  • Helm Chart Source Files - These are Helm chart files stored in standard Helm syntax in YAML on a remote Git repo or Helm repo.

For steps on using a Helm Repository, see Helm Charts.

At runtime, the Harness Delegate pulls the remote configuration files from the repo and then uses them to create resources via the Kubernetes API. It does not matter if the Delegate runs in the same Kubernetes cluster as the deployed pods. The Kubernetes API is used by the Delegate regardless of the cluster networking topology.

You can also use a Git repo for your entire Harness Application, and sync it unidirectionally or bidirectionally. For more information, see Configuration as Code. There is no conflict between the Git repo used for remote Manifests files and the Git repo used for the entire Harness Application.

To use remote configuration files, do the following:

  1. In Manifests, click the vertical ellipsis and click Use Remote Manifests.
    The Remote Manifests dialog appears.

  2. Fill out the Remote Manifests dialog and click SUBMIT. The Remote Manifests dialog has the following fields.
  • Manifest Format - Select Standard Kubernetes Resources in YAML format, Helm Chart Source Files in YAML format, or Helm Chart from Source Repository.
  • Git Connector - Select a SourceRepo Provider for the Git repo you added to your Harness account. For more information, see Add SourceRepo Providers.
  • Commit ID - Select Latest from Branch or Specific Commit ID.
  • Branch/Commit ID - Required. Enter the branch or commit ID for the remote repo.
  • File/Folder path(s) - Enter the repo file and folder path.
If you want to use Go templating in your remote repo for your configuration files in Manifests, ensure that the values.yaml file is at the root of the folder path you select.
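
For example, if File/Folder path(s) is set to NGINX/, a repo layout like the following works (the fetch log later in this section shows these same paths):

NGINX/
  values.yaml          # must be at the root of the selected folder path
  templates/
    spec.yaml          # manifest with the ConfigMap and Deployment object descriptions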

When the remote manifests are added, the Manifests section displays the connection details.

When you deploy a Workflow or Pipeline that uses this Service, you can see the Delegate fetch the Manifests files from the repo in the Fetch Files section of the log Harness Deployments:

Fetching files from git

Git connector Url: https://github.com/michaelcretzman/harness-example # remote manifest files
Branch: example # Git repo branch

Fetching NGINX/values.yaml # values.yaml file in repo
Successfully fetched NGINX/values.yaml

Fetching manifest files at path: NGINX/ # manifest files in repo

Successfully fetched following manifest [.yaml] files:
- templates/spec.yaml # manifest file with ConfigMap and Deployment objects

Done.

If you experience errors fetching the remote files, it is most likely because the wrong branch has been configured in the Git Connector.

To return to local configuration files, click the vertical ellipsis and select Use Local Manifests.

Your remote files are not copied locally. You are simply presented with the local configuration files you used last.

Config Variables

You can create Service-level variables to use throughout your Service configuration settings.

To use service-level variables, do the following:

  1. In Configuration, in Config Variables, click Add Variable. The Config Variable dialog appears.
  2. In Name, enter a name for your variable. This is the name you will use to reference your variable anywhere in your Service.
  3. In Type, select Text or Encrypted Text. If you selected Text, enter a string in Value.

    If you select Encrypted Text, the Value field becomes a drop-down and you can select any of the Encrypted Texts you have configured in Harness Secrets Management.

    An Encrypted Text secret is available here only if its Usage Scope includes the current Application. For more information, see Secret Management.

    You can also click Add New Encrypted Text to create a new secret. It will be stored in Secrets Management with this Application in its Usage Scope.

    You can also use Harness variables in the Value field. To use a variable, enter $ and see the available variables.

For example, to add artifact variables, enter ${artifact and the available artifact variables are displayed. For more information about variables, see Variables and Expressions in Harness.

  4. Click SUBMIT. The variable is added to Config Variables.
  5. To reference the variable in your Manifest values.yaml file, type ${serviceVariable and Harness will provide a drop-down list of all variables. Begin typing the name of your variable to find it and select it. Or you can simply type in ${serviceVariable and the variable name, like ${serviceVariable.test}.

For example, let's say you added the variable test to Config Variables.

To reference that variable in the values.yaml file, you would do this:

myvar: ${serviceVariable.test}
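
The value then flows into your manifests through Go templating. Here is a minimal sketch, assuming the myvar key above and a hypothetical ConfigMap named example-config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  myKey: {{.Values.myvar}} # rendered from ${serviceVariable.test} at runtime
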
Secrets in Config Variables

When you select Encrypted Text as the option in the Config Variable dialog Type field, you can select one of the encrypted text secrets you have in the Harness vault. For information about adding encrypted text secrets to the Harness vault, see Secrets Management.

Let's look at an example where a Config Variable named password has been added.

Here is how the values.yaml references the variable:

password: ${serviceVariable.password}

Here is how the values.yaml variable is used in a manifest file:

apiVersion: v1
kind: Secret
metadata:
  name: sample
stringData:
  password: {{.Values.password}}
type: Opaque

For encrypted text, Harness stores it in the Harness vault (or your vault) and uses it at runtime. You do not need to manage the secret outside of Harness. For more information, and to see where encrypted text is stored in Harness, see Secrets Management.

Helm Charts

You can use remote Helm charts in a Git Repo or a Helm Repository.

To use a Helm chart in a remote Git repo, follow the steps in Remote Configuration Files.

To use a Helm Repo, do the following:

  1. Add a connection to the Helm repo in Harness as a Helm Repository Artifact Server. See Helm Repository.
  2. In your Harness Kubernetes Service, in Manifests, click Use Remote Manifests.
    The Remote Manifests dialog appears.
  3. In Manifest Format, select Helm Chart from Helm Repository.
  4. In Helm Repository, select the Helm Repository Artifact Server you added to Harness.
  5. In Chart Name, enter the name of the chart in that repo. For example, we use nginx.
  6. In Chart Version, enter the chart version to use. This is found in the version label in Chart.yaml. For this guide, we will use 1.0.1. If you leave this field empty, Harness gets the latest chart.

When you are finished, the dialog will look like this:

Click SUBMIT. The Helm repo is added to Manifests.

When you deploy a Workflow using a Harness Kubernetes Service set up with a Helm Repository, you will see Harness fetch the chart:

Next, you will see Harness initialize using the chart:

Config Files

Files added in the Config Files section are referenced using the configFile.getAsString("fileName") Harness expression:

  • configFile.getAsString("fileName") - Unencrypted text file.
  • configFile.getAsBase64("fileName") - Encrypted text file.

For example, let's add a Config Files file named config-file-example.txt.

You would reference this file in the values.yaml file like this:

${configFile.getAsString("config-file-example.txt")}
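
To use the file contents in a manifest, give the expression a key in values.yaml and reference that key with Go templating. A minimal sketch follows, assuming a hypothetical key named appconfig and a single-line file (multi-line content needs YAML block handling). In the values.yaml file:

appconfig: ${configFile.getAsString("config-file-example.txt")}

In a manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  app.properties: {{.Values.appconfig}}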

Another example is having two Config Files entries that reference encrypted files in the Harness vault, one for a TLS certificate and one for a TLS key.

In the values.yaml file we could reference the files like this:

tlscrt: ${configFile.getAsBase64("tlscrt")}
tlskey: ${configFile.getAsBase64("tlskey")}

In a Kubernetes Secret specification file in Manifests, it will be used like this:

apiVersion: v1
kind: Secret
metadata:
  name: sample
data:
  tls.crt: {{.Values.tlscrt}}
  tls.key: {{.Values.tlskey}}
type: kubernetes.io/tls

At runtime, Harness takes the encrypted file from its (or your) KMS vault and sends it to Kubernetes via the Kubernetes API. For more information, see Secrets Management in Harness docs and Secrets from Kubernetes.

Values YAML Override

In the Values YAML Override section, you can enter the YAML for your values.yaml file that overrides a remote manifest file or remote Helm chart file.

You can override the values using the Local or Remote options.

For Remote, do the following:

  1. In Source Repository, select the Git repo you added as a Source Repo Provider.
  2. For Commit ID, select either Latest from Branch and enter in the branch name, or Specific Commit ID and enter in the commit ID.
  3. In File path, enter the path to the values.yaml file in the repo, including the repo name, like helm/values.yaml.

Values in Services can also be overwritten in Harness Environments. For more information, see Override Service Settings.

Kubernetes Environments

Once you've added a Service to your Application, you can define Environments where your Service can be deployed. In an Environment, you specify the following:

  • A Harness Service, such as a Service with a Docker image artifact you configured.
  • A deployment type, such as Kubernetes.
  • A Cloud Provider, such as a Kubernetes cluster or Google Cloud Platform that you added in Add Cloud Providers.

An environment can be a Dev, QA, Production, or other environment. You can deploy one or many Services to each environment.

Create a New Harness Environment

The following procedure creates an Environment for the Service (Kubernetes type) we set up.

  1. In your Harness Application, click Environments. The Environments page appears.
  2. Click Add Environment. The Environment dialog appears.
  3. In Name, enter a name that describes the deployment environment, for example, K8s-GCP.
  4. In Environment Type, select Non-Production.
  5. Click SUBMIT. The new Environment page appears.

Add a Service Infrastructure

You define the Kubernetes cluster to use for deployment as a Service Infrastructure.

To add a service infrastructure, do the following:

  1. In the Harness Environment, click Add Service Infrastructure. The Service Infrastructure dialog appears.
  2. In Service, select the Harness service you created earlier for your Docker image.
  3. In Cloud Provider, select the Cloud Provider you added earlier.
  4. Click Next. The Configuration section is automatically populated with the clusters located using the Cloud Provider connection. If the Cluster Name drop-down is taking a long time to load, check the connectivity from the host running the Harness Delegate to the Cloud Provider.
  5. In Cluster Name, select the cluster you created for this deployment. If you are using the Delegate to authenticate the Cloud Provider, the cluster is already listed.
  6. In Namespace, select the namespace of the Kubernetes cluster. Typically, this is default.

The namespace entered in Namespace must already exist during deployment. Harness will not create a new namespace if you enter one here. You must define the Kubernetes Namespace object in a manifest in the Service Manifests section. You can use the Harness variable ${infra.kubernetes.namespace} to reference the value you entered in the Service Infrastructure Namespace field.

This setup is described in Example 4: Creating Namespace based on Environment InfraMapping.

When you are finished with the Configuration section, the dialog will look something like this:

  7. Click SUBMIT. The new service infrastructure is added to the Harness environment.

That is all you have to do to set up the deployment Environment in Harness. You have the Service and deployment Environment set up. Now you can create the deployment Workflow in Harness.

Override Service Settings

Your Service Infrastructure can overwrite Service Config Variables, Config Files, and values.yaml settings. This enables you to have a Service keep its settings but have them changed when used with this Environment. For example, you might have a single Service but an Environment for QA and an Environment for Production, and you want to overwrite the namespace setting in the Service depending on the Environment.

You can also overwrite Service variables at the Phase level of a multi-Phase Workflow.

To override a Service setting, do the following:

  1. In the Harness Environment, in the Service Configuration Overrides section, click Add Configuration Overrides. The Service Configuration Override dialog appears.
  2. In Service, select the Service you are using for your Kubernetes deployment. The override types appear.
  3. Click Variable Override. The Variable Override options appear.
    1. In Configuration Variable, select a variable configured in the Service's Config Variables settings.
    2. In Type, select Text or Encrypted Text.
    3. In Override Value, enter the value to overwrite the variable value in the Service. If you selected Encrypted Text in Type, you can select any of the Encrypted Text values defined in Secrets Management.
  4. Click File Override.
    1. In File Name, select a file configured in the Service's Config Files settings.
    2. In File, select the file to overwrite the Service's Config Files file.
  5. Click Values YAML. Click Local or Remote.
    1. Local - Enter the values.yaml variables and values just as you would in a Service Manifests values.yaml. Ensure the name of the variable you want to overwrite is identical.
    2. Remote - Specify the Git repo branch and folder for the remote manifest files to use, as explained in Remote Configuration Files.

Here is an example of overwriting a Service values.yaml with a Service Configuration Override.

In the Service values.yaml, we have a variable for replicas:

replicas: 1

This is used in the manifest file like this:

...
spec:
  replicas: {{int .Values.replicas}}
...

Now, in Service Configuration Override, you can overwrite the Service values.yaml replicas value:
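
For example, with the Values YAML override Local option selected, entering just the following line is enough; the key name must match the one in the Service values.yaml:

replicas: 3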

At deployment runtime to this Environment, the overwritten replicas value is used:

INFO   2019-02-15 11:32:44    kind: Deployment
INFO   2019-02-15 11:32:44    metadata:
INFO   2019-02-15 11:32:44      name: nginx-deployment
INFO   2019-02-15 11:32:44      labels:
INFO   2019-02-15 11:32:44        app: nginx
INFO   2019-02-15 11:32:44    spec:
INFO   2019-02-15 11:32:44      replicas: 3

Canary Workflows and Deployments

This section will walk through creating a Canary Workflow in Harness and what the Workflow steps deployment logs include.

By default, Harness Canary Workflows have two phases:

  1. Harness creates a Canary version of the Kubernetes Deployment object defined in your Service Manifests section. Once that Deployment is verified, the Workflow deletes it by default.
  2. Run the actual deployment using a Kubernetes RollingUpdate with the number of pods you specify in the Service Manifests files (for example, replicas: 3).
If you are new to Kubernetes RollingUpdate deployments, see Performing a Rolling Update from Kubernetes. That guide summarizes RollingUpdate and provides an interactive, online tutorial.

To create a Canary Workflow for Kubernetes, do the following:

  1. In your Application, click Workflows.
  2. Click Add Workflow. The Workflow dialog appears.
  3. In Name, enter a name for your Workflow, such as NGINX-K8s.
  4. In Workflow Type, select Canary Deployment.
  5. In Environment, select the Environment you created for your Kubernetes deployment.
  6. Click SUBMIT. By default, the new Canary Workflow does not have any phases pre-configured.
    We will create the two Phases for the Canary Deployment next.

Canary Phase

The Canary Phase creates a Canary deployment using your Service Manifests files and the number of pods you specify in the Canary Deployment step in the Workflow.

To add the Canary Phase, do the following:

  1. In Deployment Phases, click Add Phase. The Workflow Phase dialog appears.

  2. In Service, select the Service where you set up your Kubernetes configuration files.
  3. In Service Infrastructure, select the Service Infrastructure where you want this Workflow Phase to deploy your Kubernetes objects. This is the Service Infrastructure with the Kubernetes cluster and namespace for this Phase's deployment.
  4. In Service Variable Overrides, you can add a variable to overwrite any variable in the Service you selected. Ensure that the variable names are identical. This is the same process described for overwriting variables in Override Service Settings.
  5. Click SUBMIT. The new Phase is created.

You'll notice the Phase is titled Canary automatically. Let's look at the default settings for this first Phase of a Canary Workflow.

Canary Deployment Step

Click the Canary Deployment step. The Canary Deployment step dialog appears.

In this step, you will define how many pods are deployed for a Canary test of the configuration files in your Service Manifests section.

  1. In Instance Unit Type, click COUNT or PERCENTAGE.
  2. In Instances, enter the number of pods to deploy.
    If you selected COUNT in Instance Unit Type, this is simply the number of pods.
    If you selected PERCENTAGE, enter a percentage of the pods defined in your Service Manifests files to deploy. For example, if you have replicas: 4 in a manifest in Service, and you enter 50 in Instances, then 2 pods are deployed in this Phase step.

Canary Deployment Step in Deployment

Let's look at an example where the Canary Deployment step is configured to deploy a COUNT of 2. Here is the step in the Harness Deployments page:

You can see Target Instance Count 2 in the Details section.

Below Details you can see the logs for the step.

Let's look at the Prepare, Apply, and Wait for Steady State sections of the step's deployment log, with comments added:

Prepare

Here is the log from the Prepare section:

Manifests processed. Found following resources: 

# API objects in manifest file

Kind                Name                                    Versioned
ConfigMap           harness-example-config                  true
Deployment          harness-example-deployment              false

# each deployment is versioned, this is the second deployment

Current release number is: 2

Versioning resources.

# previous deployment

Previous Successful Release is 1

Cleaning up older and failed releases

# existing number of pods

Current replica count is 1

# Deployment workload executed

Canary Workload is: Deployment/harness-example-deployment-canary

# number specified in Canary Deployment step Instance field

Target replica count for Canary is 2

Done.

The name of the Deployment workload in the Service Manifests file is harness-example-deployment (the name variable is harness-example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.name}}-deployment

As you can see, Harness appends the name with -canary, harness-example-deployment-canary. This is to identify Canary Deployment step workloads in your cluster.

The next section is Apply.

Apply

Here you will see the manifests in the Service Manifests section applied using kubectl as a single file, manifests.yaml.

# kubectl command to apply manifests

kubectl --kubeconfig=config apply --filename=manifests.yaml --record

# ConfigMap object created

configmap/harness-example-config-2 created

# Deployment object created

deployment.apps/harness-example-deployment-canary created

Done.

Next, Harness logs the steady state of the pods.

Wait for Steady State

Harness displays the status of each pod deployed and confirms steady state.

# kubectl command for get events

kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only

# kubectl command for status

kubectl --kubeconfig=config rollout status Deployment/harness-example-deployment-canary --watch=true

# status of each pod

Status : Waiting for deployment "harness-example-deployment-canary" rollout to finish: 0 of 2 updated replicas are available...
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 MountVolume.SetUp succeeded for volume "default-token-hwzdf" SuccessfulMountVolume
Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 pulling image "registry.hub.docker.com/library/nginx:stable-perl" Pulling
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 pulling image "registry.hub.docker.com/library/nginx:stable-perl" Pulling
Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled
Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Created container Created
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Created container Created
Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Started container Started
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Started container Started

Status : Waiting for deployment "harness-example-deployment-canary" rollout to finish: 1 of 2 updated replicas are available...

# canary deployment step completed

Status : deployment "harness-example-deployment-canary" successfully rolled out

Done.

Wrap Up

The Wrap Up log is long and describes all of the container and pod information for the step, using the kubectl command:

kubectl --kubeconfig=config describe --filename=manifests.yaml

Canary Delete Step

Since the Canary Deployment step was successful, it is no longer needed. The Canary Delete step is used to clean up the workload deployed by the Canary Deployment step.

Harness does not roll back Canary deployments because your production is not affected during Canary. Canary catches issues before moving to production. Also, you might want to analyze the Canary deployment. The Canary Delete step is useful to perform cleanup when required.

In the Wrap Up section of the Workflow is the Canary Delete step.

This step deletes the specified resources. By default, this is the workload deployed in the Canary Deployment step, represented by the variable ${k8s.canaryWorkload}. You can add a namespace before the resource name, like this: example/${k8s.canaryWorkload}.

You can also use the Canary Delete step in the last Workflow of a Pipeline to clean up any workloads you no longer need.

Phase 1 of the Canary Deployment Workflow is complete. Now the Workflow needs a Primary Phase to roll out the objects defined in the Service Manifests section.

The Canary Delete step discussed here is just the Delete step renamed to Canary Delete. You can add a Delete step in any Workflow and simply identify the resource to delete.

Primary Phase

The Primary Phase runs the actual deployment as a rolling update with the number of pods you specify in the Service Manifests files (for example, replicas: 3).

As with application scaling, during a rolling update of a Deployment the Kubernetes service load-balances traffic only to available pods (instances that are available to the users of the application).

To add the Primary Phase, do the following:

  1. In your Workflow, in Deployment Phases, under Canary, click Add Phase.
    The Workflow Phase dialog appears.
  2. In Service, select the same Service you selected in Phase 1.
  3. In Service Infrastructure, select the same Service Infrastructure you selected in Phase 1.
  4. In Service Variable Overrides, you can add a variable to overwrite any variable in the Service you selected. Ensure that the variable names are identical. This is the same process described for overwriting variables in Override Service Settings.
  5. Click SUBMIT. The new Phase is created.

The Phase is named Primary automatically, and contains one step, Rollout Deployment.

Rollout Deployment performs a rolling update. Rolling updates allow an update of a Deployment to take place with zero downtime by incrementally updating pod instances with new ones. The new pods are scheduled on nodes with available resources. The rolling update Deployment uses the number of pods you specified in the Service Manifests (number of replicas).
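
The pace of the rolling update is controlled by the Deployment's update strategy. If you want to tune it, you can add the standard Kubernetes strategy fields to the Deployment manifest in your Service Manifests. Here is a minimal sketch, assuming you want at most one extra pod and no unavailable pods during the update (if you omit these fields, Kubernetes defaults to 25% max surge and 25% max unavailable, which is what the Wrap Up log later in this section shows):

spec:
  replicas: {{int .Values.replicas}}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one pod above the desired replica count
      maxUnavailable: 0  # keep all desired pods available during the update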

Primary Step in Deployment

Let's look at an example where the Primary step deploys the Service Manifests objects. Here is the step in the Harness Deployments page:

Before we look at the logs, let's look at the Service Manifests files it's deploying.

Here is the values.yaml from our Service Manifests section:

name: harness-example
replicas: 1
image: ${artifact.metadata.image}

Here is the spec.yaml from our Service Manifests section:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{.Values.name}}-config
data:
  key: value
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.name}}-deployment
spec:
  replicas: {{int .Values.replicas}}
  selector:
    matchLabels:
      app: {{.Values.name}}
  template:
    metadata:
      labels:
        app: {{.Values.name}}
    spec:
      containers:
      - name: {{.Values.name}}
        image: {{.Values.image}}
        envFrom:
        - configMapRef:
            name: {{.Values.name}}-config
        ports:
        - containerPort: 80

Let's look at the Initialize, Prepare, and Apply stages of the Rollout Deployment.

Initialize

In the Initialize section of the Rollout Deployment step, you can see the same object descriptions as the Service Manifests section:

Initializing..

Manifests [Post template rendering] :

# displays the manifests taken from the Service Manifests section

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: harness-example-config
data:
  key: value
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harness-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: harness-example
  template:
    metadata:
      labels:
        app: harness-example
    spec:
      containers:
      - name: harness-example
        image: registry.hub.docker.com/library/nginx:stable-perl
        envFrom:
        - configMapRef:
            name: harness-example-config
        ports:
        - containerPort: 80

# Validates the YAML syntax of the manifest with a dry run

Validating manifests with Dry Run

kubectl --kubeconfig=config apply --filename=manifests-dry-run.yaml --dry-run
configmap/harness-example-config created (dry run)
deployment.apps/harness-example-deployment configured (dry run)

Done.

Now that Harness has ensured that manifests can be used, it will process the manifests.

Prepare

In the Prepare section, you can see that Harness versions the ConfigMap and Secret resources (for more information, see Releases and Versioning).

Manifests processed. Found following resources: 

# determine if the resources are versioned

Kind                Name                                    Versioned
ConfigMap           harness-example-config                  true
Deployment          harness-example-deployment              false

# indicates that these objects have been released before

Current release number is: 2

Previous Successful Release is 1

# removes unneeded releases

Cleaning up older and failed releases

# identifies new Deployment workload

Managed Workload is: Deployment/harness-example-deployment

# versions the new release

Versioning resources.

Done.

Now Harness can apply the manifests.

Apply

The Apply section shows the kubectl commands for applying your manifests.

# the Service Manifests section are compiled into one file and applied

kubectl --kubeconfig=config apply --filename=manifests.yaml --record

# the objects applied

configmap/harness-example-config-2 configured
deployment.apps/harness-example-deployment configured

Done.

Now that the manifests are applied, you can see the container and pod details described in Wrap Up.

Wrap Up

Wrap Up is long and uses a kubectl describe command to provide information on all containers and pods deployed:

kubectl --kubeconfig=config describe --filename=manifests.yaml

Here is a sample from the output that displays the Kubernetes RollingUpdate:

# Deployment name

Name: harness-example-deployment

# namespace from Deployment manifest

Namespace: default
CreationTimestamp: Wed, 13 Feb 2019 01:00:49 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 2
             kubectl.kubernetes.io/last-applied-configuration:
               {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --kubeconfig=config --f...
             kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true

# Selector applied

Selector: app=harness-example,harness.io/track=stable

# number of replicas from the manifest

Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable

# RollingUpdate strategy

StrategyType: RollingUpdate
MinReadySeconds: 0

# RollingUpdate progression

RollingUpdateStrategy: 25% max unavailable, 25% max surge

As you look through the description in Wrap Up, you can see the label added:

add label: harness.io/track: stable

You can use the harness.io/track: stable label as a selector for managing traffic to these pods, or for testing the pods. For more information, see Harness Annotations and Labels.
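
For example, a Kubernetes service that should send traffic only to pods from the completed rollout can include the track label in its selector. Here is a minimal sketch, assuming the app label used in this guide and a hypothetical service name:

apiVersion: v1
kind: Service
metadata:
  name: harness-example-svc   # hypothetical name
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: harness-example
    harness.io/track: stable  # only pods labeled as stable receive traffic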

Canary Workflow Deployment

Now that the setup is complete, you can click Deploy in the Workflow to deploy the artifact to your cluster.

Next, select the artifact build version and click SUBMIT.

The Workflow is deployed.

Now that you have successfully deployed your artifact to your Kubernetes cluster pods using your Harness Application, look at the completed workload in the deployment environment of your Kubernetes cluster.

For example, here is the Deployment workload in Google Cloud Kubernetes Engine:

Or you can simply connect to your cluster in a terminal and see the pod(s) deployed:

john_doe@cloudshell:~ (project-15454)$ kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
harness-example-deployment-7df7559456-xdwg5   1/1     Running   0          9h

Blue/Green Workflows and Deployments

This section describes how to create the Harness Service and Workflow for a Blue/Green Kubernetes deployment. There are no Blue/Green-specific settings for the Harness Environment (see Kubernetes Environments for steps on setting up the Harness Environment for any Kubernetes deployment).

Blue/Green Deployment Summary

Blue/Green deployment is a method that reduces downtime and risk by running two identical environments called Blue and Green. At any time, only one of the environments is live, serving all production traffic. For example, Blue is live (primary) and Green is idle (stage).

As you prepare a new version of your software, deployment and the final stage of testing takes place in the stage environment, for example Green. Once you have deployed and fully tested the software in Green, you switch the network routing so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle. In Kubernetes, a primary and a stage Kubernetes service are swapped.

For information on Kubernetes services, see Services from Kubernetes.

Blue/Green Rollback

A great benefit of a Blue/Green deployment is rapid rollback: rolling back to the old version of a service/artifact is simple and reliable because network traffic is simply routed back to the original instances. You do not need to redeploy previous versions of the service/artifact and the instances that comprised their environment.

Kubernetes Service for Blue/Green

When you create a Harness Service for a Blue/Green deployment, you need to include a specification for both of the Kubernetes services used in Blue/Green.

Harness refers to the two services as primary and stage Kubernetes services, distinguished using the following mandatory annotations:

  • Primary - annotations: harness.io/primary-service: true
  • Stage - annotations: harness.io/stage-service: true

No other labels or values are needed to distinguish the two Kubernetes services.

Here is an example Harness Service Manifests section for Blue/Green.

You can see the primary Kubernetes service, service-green.yaml, and the stage service, service-blue.yaml. The spec.yaml file contains the Deployment object manifest, and the values.yaml is used for Go templating.

Let's quickly look at the Service Manifests files for a Blue/Green deployment. If you have not created a Kubernetes type Harness Service, see Kubernetes Services.

Here is the values.yaml:

name: harness-example
replicas: 3
image: ${artifact.metadata.image}

Here is the service-green.yaml (primary) manifest file:

apiVersion: v1
kind: Service
metadata:
  name: {{.Values.name}}-svc-primary
  annotations:
    # mandatory annotation indicating the primary service
    harness.io/primary-service: true
  labels:
    app: bg-demo-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: bg-demo-app

Here is the service-blue.yaml (stage) manifest file:

apiVersion: v1
kind: Service
metadata:
  name: {{.Values.name}}-svc-stage
  annotations:
    # mandatory annotation indicating the stage service
    harness.io/stage-service: true
  labels:
    app: bg-demo-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: bg-demo-app

Here is the deployment.yaml that contains the Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.name}}
spec:
  selector:
    matchLabels:
      app: bg-demo-app
  replicas: {{.Values.replicas}}
  template:
    metadata:
      labels:
        app: bg-demo-app
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

That is all that is needed to set up a simple Harness Service for Kubernetes Blue/Green deployment.

Kubernetes Workflows and Deployments for Blue/Green

When you create a Harness Kubernetes Workflow for Blue/Green deployment, Harness automatically generates the steps for setting up the Kubernetes services you defined in your Harness Service, and for swapping the Kubernetes services between the applications.

To create a Kubernetes Blue/Green Workflow, do the following:

  1. In your Application containing a completed Service and Environment for the Blue/Green deployment, click Workflows.
  2. Click Add Workflow. The Workflow dialog appears.
  3. In Name, enter a name for your Workflow, such as NGINX-K8s-BG.
  4. In Workflow Type, select Blue/Green Deployment.
  5. In Environment, select the Environment you created for your Kubernetes deployment.
  6. In Service, select the Service containing the manifest files you want to use for your deployment.
  7. In Service Infrastructure, select the Service Infrastructure where you want to deploy.
    When you are finished, the Workflow dialog will look like this:
  8. Click SUBMIT. The new Workflow appears.

Let's look at each step in the Workflow and its deployment step log.

Stage Deployment

The Stage Deployment step simply deploys the two Kubernetes services you have set up in the Harness Service Manifests section.

When you look at the Stage Deployment step in Harness Deployments, you will see the following log sections.

Initialize

The Initialize stage initializes the two Kubernetes services you have set up in the Harness Service Manifests section (displayed earlier), primary and stage, validating their YAML.

Initializing..

Manifests [Post template rendering] :

---
apiVersion: v1
kind: Service
metadata:
  name: harness-example-svc-primary
  annotations:
    harness.io/primary-service: true
  labels:
    app: bg-demo-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: bg-demo-app
---
apiVersion: v1
kind: Service
metadata:
  name: harness-example-svc-stage
  annotations:
    harness.io/stage-service: true
  labels:
    app: bg-demo-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: bg-demo-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harness-example
spec:
  selector:
    matchLabels:
      app: bg-demo-app
  replicas: 3
  template:
    metadata:
      labels:
        app: bg-demo-app
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80


Validating manifests with Dry Run

kubectl --kubeconfig=config apply --filename=manifests-dry-run.yaml --dry-run
service/harness-example-svc-primary configured (dry run)
service/harness-example-svc-stage configured (dry run)
deployment.apps/harness-example created (dry run)

Done.

Prepare

Typically, in the Prepare section, you can see that each release of the resources is versioned. This is used in case Harness needs to roll back to a previous version. In the case of Blue/Green, the resources are not versioned because a Blue/Green deployment uses rapid rollback: network traffic is simply routed back to the original instances. You do not need to redeploy previous versions of the service/artifact and the instances that comprised their environment.

Manifests processed. Found following resources: 

Kind         Name                          Versioned
Service      harness-example-svc-primary   false
Service      harness-example-svc-stage     false
Deployment   harness-example               false

Primary Service is at color: blue
Stage Service is at color: green

Cleaning up non primary releases

Current release number is: 2

Versioning resources.

Workload to deploy is: Deployment/harness-example-green

Done.

Apply

The Apply section applies a combination of all of the manifests in the Service Manifests section as one file using kubectl apply.

kubectl --kubeconfig=config apply --filename=manifests.yaml --record

service/harness-example-svc-primary configured
service/harness-example-svc-stage configured
deployment.apps/harness-example-blue configured

Done.

Wait for Steady State

The Wait for Steady State section displays the blue service rollout event.

kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only

kubectl --kubeconfig=config rollout status Deployment/harness-example-blue --watch=true


Status : deployment "harness-example-blue" successfully rolled out

Done.

Next, the Swap Primary with Stage Workflow step will swap the blue and green services to route primary network traffic to the new version of the application, and stage network traffic to the old version.

Swap Primary with Stage

In the Blue/Green Workflow, click the Swap Primary with Stage step.

You can see that the primary Kubernetes service is represented by the variable ${k8s.primaryServiceName}, and the stage service by the variable ${k8s.stageServiceName}. You can see how the swap works in the Swap Primary with Stage step in Harness Deployments.

Here is the log for the step, where the mandatory Selectors you used in the Harness Service Manifests files are used.

Begin execution of command Kubernetes Swap Service Selectors

Selectors for Service One : [name:harness-example-svc-primary]
app: bg-demo-app
harness.io/color: green


Selectors for Service Two : [name:harness-example-svc-stage]
app: bg-demo-app
harness.io/color: blue


Swapping Service Selectors..

Updated Selectors for Service One : [name:harness-example-svc-primary]
app: bg-demo-app
harness.io/color: blue


Updated Selectors for Service Two : [name:harness-example-svc-stage]
app: bg-demo-app
harness.io/color: green

Done

The Swap Primary with Stage command is simply the Swap Service Selectors command renamed to Swap Primary with Stage for this Workflow type. You can use Swap Service Selectors to swap the pods referred to by any two Kubernetes services. You simply put the expressions for any two services (${k8s.primaryServiceName}, ${k8s.stageServiceName}) and they will be swapped. For example, you can have a Blue/Green deployment Workflow to swap services and then a separate Workflow that uses the Swap Service Selectors command to manually swap back when needed.
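
Under the hood, the swap amounts to exchanging the harness.io/color values in each Service's spec.selector. The following kubectl patch commands are a conceptual sketch of that exchange, using the service names and colors from the log above; they are not the exact commands Harness executes:

# point the primary service at the newly deployed (blue) pods
kubectl --kubeconfig=config patch service harness-example-svc-primary \
  -p '{"spec":{"selector":{"app":"bg-demo-app","harness.io/color":"blue"}}}'

# point the stage service at the previous (green) pods
kubectl --kubeconfig=config patch service harness-example-svc-stage \
  -p '{"spec":{"selector":{"app":"bg-demo-app","harness.io/color":"green"}}}'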

Blue/Green Workflow Deployment

Now that the setup is complete, you can click Deploy in the Workflow to deploy the artifact to your cluster.

Next, select the artifact build version and click SUBMIT.

The Workflow is deployed.

The swap is complete and the Blue/Green deployment was a success.

In the Deployments page, expand the Workflow steps and click the Swap Primary with Stage step.

In the Details section, click the vertical ellipsis and click View Execution Context.

You can see the names of the primary and stage services deployed.

Now that you have successfully deployed your artifact to your Kubernetes cluster pods using your Harness Application, look at the completed workload in the deployment environment of your Kubernetes cluster.

For example, here is the Blue/Green workload in Google Cloud Kubernetes Engine, displaying the blue and green services and Deployment workload:

If you click a workload, you will see the pods and service created:

Rolling Update Workflows and Deployments

A rolling update strategy updates Kubernetes Deployments with zero downtime by incrementally replacing pod instances with new ones. New pods are scheduled on nodes with available resources.

For a detailed explanation, see Performing a Rolling Update from Kubernetes.

Kubernetes Service for Rolling Update

There are no mandatory Rolling Update-specific settings for the Harness Service. You can use any Kubernetes configuration in your Service Manifests section.

The default Rolling Update strategy used by Harness is:

RollingUpdateStrategy:  25% max unavailable, 25% max surge

If you want to set a Rolling Update strategy that is different from the default, you can include the strategy settings in your Deployment manifest:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1

For details on the settings, see RollingUpdateDeployment in the Kubernetes API.
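
For example, here is a minimal sketch of where the strategy block sits in a Deployment manifest, at the same level as replicas and selector. The names and values are illustrative only, not a required configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: harness-example-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: harness-example
  # custom Rolling Update settings override the 25%/25% default
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: harness-example
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80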

Kubernetes Workflows and Deployments for Rolling Update

To create a Kubernetes Rollout Deployment Workflow, do the following:

  1. In your Application containing a completed Service and Environment for the Rollout Deployment, click Workflows.
  2. Click Add Workflow. The Workflow dialog appears.
  3. In Name, enter a name for your Workflow, such as NGINX-K8s-Rolling.
  4. In Workflow Type, select Rolling Deployment.
  5. In Environment, select the Environment you created for your Kubernetes deployment.
  6. In Service, select the Service containing the manifest files you want to use for your deployment.
  7. In Service Infrastructure, select the Service Infrastructure where you want to deploy.
  8. When you are finished, the Workflow dialog will look like this:
  9. Click SUBMIT. The new Workflow appears.

The Workflow generates the Rollout Deployment step automatically. There is nothing to update. You can deploy the Workflow.

Let's look at what the Rollout Deployment step does in the deployment logs.

Apply

The Apply section deploys the manifests from the Service Manifests section as one file.

kubectl --kubeconfig=config apply --filename=manifests.yaml --record

configmap/harness-example-config-3 configured
deployment.apps/harness-example-deployment created

Done.

Wait for Steady State

The Wait for Steady State section shows the containers and pods rolled out.

kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only

kubectl --kubeconfig=config rollout status Deployment/harness-example-deployment --watch=true


Status : Waiting for deployment "harness-example-deployment" rollout to finish: 0 of 2 updated replicas are available...
Event : Pod harness-example-deployment-5674658766-6b2fw Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled
Event : Pod harness-example-deployment-5674658766-p9lpz Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled
Event : Pod harness-example-deployment-5674658766-6b2fw Created container Created
Event : Pod harness-example-deployment-5674658766-p9lpz Created container Created
Event : Pod harness-example-deployment-5674658766-6b2fw Started container Started
Event : Pod harness-example-deployment-5674658766-p9lpz Started container Started

Status : Waiting for deployment "harness-example-deployment" rollout to finish: 1 of 2 updated replicas are available...

Status : deployment "harness-example-deployment" successfully rolled out

Done.

Wrap Up

The Wrap Up section shows the Rolling Update strategy used.

...
Name: harness-example-deployment
Namespace: default
CreationTimestamp: Sun, 17 Feb 2019 22:03:53 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --kubeconfig=config --f...
kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true
Selector: app=harness-example
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
...
NewReplicaSet: harness-example-deployment-5674658766 (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 8s deployment-controller Scaled up replica set harness-example-deployment-5674658766 to 2

Done.

Rolling Update Workflow Deployment

Now that the setup is complete, you can click Deploy in the Workflow to deploy the artifact to your cluster.

Next, select the artifact build version and click SUBMIT.

The Workflow is deployed.

To see the completed deployment, log into your cluster and run kubectl get all. The output lists the new Deployment:

NAME                                               READY   STATUS    RESTARTS   AGE
pod/harness-example-deployment-5674658766-6b2fw    1/1     Running   0          34m
pod/harness-example-deployment-5674658766-p9lpz    1/1     Running   0          34m

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.83.240.1   <none>        443/TCP   34m

NAME                                         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/harness-example-deployment   2         2         2            2           34m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/harness-example-deployment-5674658766    2         2         2       34m

Troubleshooting

The following troubleshooting steps should help address common issues.

Invalid Value LabelSelector

If you are deploying different Harness Workflows to the same cluster during testing or experimentation, you might encounter a Selector error such as this:

The Deployment "harness-example-deployment" is invalid: spec.selector:
Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"harness-example"},
MatchExpressions:[]v1.LabelSelectorRequirement{}}: field is immutable

This error means that the cluster already contains a Deployment with the same name that uses a different pod selector.

Delete or rename the Deployment. Let's look at deleting the Deployment. First, get a list of the Deployments:

kubectl get all
...

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.83.240.1   <none>        443/TCP   18d

NAME                                         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/harness-example-deployment   1         1         1            1           4d
...

And then delete the Deployment:

kubectl delete deploy/harness-example-deployment

deployment.extensions "harness-example-deployment" deleted

Rerun the Harness deployment and the error should not occur.

Delete and Scale Commands

Kubernetes Workflows include two important commands that help you manage resources, Delete and Scale.

Delete Command

The Delete command removes the resources you identify.

There are two ways to specify the resource to be removed:

  • Using the Harness built-in variable, ${k8s.canaryWorkload}. At runtime, this will resolve to something like Deployment/harness-example-deployment-canary.
  • Using the name of the resource in the format Kind/[Namespace/]Name, with Namespace optional. For example, Deployment/harness-example-deployment-canary.

Here is an example of the log from a Delete command:

Initializing..
...
Resources to delete are:
- Deployment/harness-example-deployment-canary
Done.

Scale Command

The Scale command updates the number of instances running, either by count or percentage. You use it to scale up or down from the number of instances specified before the Scale command. The number could come from the Service manifest or a previous Workflow step, whichever set the number of instances right before the Scale command.

The following Scale command increases the number of instances to 4.

If you have an odd number of instances, such as 3 instances, and then enter 50% in Scale, the number of instances is scaled down to 2.

You can scale down to 0 to remove all instances.

In the Workload field in Scale, you can enter the Harness built-in variable ${k8s.canaryWorkload} or the name of the resource in the format Kind/[Namespace/]Name, with Namespace optional. For example, Deployment/harness-example-deployment-canary. You can scale Deployment, DaemonSet, or StatefulSet.
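
Conceptually, a count-based Scale is similar to running kubectl scale against the workload, and a percentage is applied to the current replica count and rounded to a whole number (as noted above, 3 instances at 50% becomes 2). The following command is a sketch for illustration only, not the exact command Harness executes:

kubectl --kubeconfig=config scale deployment/harness-example-deployment-canary --replicas=4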

Summary of Changes in Kubernetes Deployments Version 2

The new version of Kubernetes deployments in Harness (labeled as Kubernetes V2 in the Harness Manager) includes the changes described below.

Kubernetes and Helm Service Separated

Kubernetes and Helm are now supported as separate Harness Service types. This separation is not part of Harness Kubernetes Version 2 features, but is important to note as it is a major change from how Kubernetes and Helm deployments were created in Version 1.

When you create a new Harness Service, you select the Deployment Type for the Service as Kubernetes or Helm:

Service Configuration using Manifest Files

Kubernetes Service definition now supports any type of manifest file to configure your Kubernetes microservice in Services.

Written in YAML, manifest files describe the desired state of your microservice in terms of Kubernetes API objects, such as a Kubernetes Deployment object.

Manifests can be organized in multiple files, or a single manifest file can include one or more API object descriptions. For example, the default spec.yaml in the Service contains both ConfigMap and Deployment object descriptions, demarcated with the YAML document delimiter ---.
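
For example, a single manifest file can describe a ConfigMap and a Deployment, separated by the --- delimiter. This is a minimal sketch; the names and values are illustrative only:

apiVersion: v1
kind: ConfigMap
metadata:
  name: harness-example-config
data:
  key: value
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harness-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: harness-example
  template:
    metadata:
      labels:
        app: harness-example
    spec:
      containers:
      - name: nginx
        image: nginx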

All Kubernetes resources are defined in manifests. There is no automatic creation of Kubernetes namespaces, ConfigMaps, Secrets, or ImagePullSecrets.

Kubernetes Service supports local (in Harness Manager) and remote (Git repo) manifest files. Remote files are pulled from the repo and used at deployment runtime.

Go Templating for Manifests

The Harness Service supports Go Templating for the files in its Manifests section, and automatically installs the Go text template package during deployment. You define variables in a values.yaml template file and then use those predefined variables in your manifests. A simple example would be the name variable in values.yaml:

name: harness-example

In a manifest, such as a Deployment manifest, you reference the name variable like this:

...
spec:
  containers:
  - name: {{.Values.name}}
...

When the manifest is used during deployment, the output contains the name variable value:

INFO   2019-02-12 17:00:47        spec:
INFO   2019-02-12 17:00:47          containers:
INFO   2019-02-12 17:00:47          - name: harness-example

Deployment Strategies

The following deployment strategies are supported in Version 2 Workflows:

  • Canary - This is a two-Phase Workflow. In the first Phase, Harness creates a Canary version of the Kubernetes Deployment. Once that Deployment is verified, Harness deletes it by default. In the second Phase, called Primary, the actual Deployment is updated using the Kubernetes RollingUpdate method.
If you are new to Kubernetes RollingUpdate deployments, see Performing a Rolling Update from Kubernetes. That guide summarizes RollingUpdate and provides an interactive, online tutorial.
  • Blue/Green - Blue/Green deployment is a method that reduces downtime and risk by running two identical environments called Blue and Green, both hosting different versions of the same app, and swaps networking routing between them to support primary (live) and stage application delivery. In the Harness deployment, a primary and a stage Kubernetes service are swapped.
  • Rolling - This is a Kubernetes-native RollingUpdate deployment. If you are new to Kubernetes RollingUpdate deployments, see Performing a Rolling Update from Kubernetes.

All deployment methods are covered in this document.

Releases and Versioning

Every Harness deployment creates a new release with an incrementally increasing number. Release history is stored in the Kubernetes cluster in a ConfigMap. This ConfigMap is essential for release tracking, versioning and rollback.

By default, all the ConfigMap and Secrets resources are versioned by Harness. Corresponding references in PodSpec are also updated with versions.

Versioning does not change how you use Secrets. You do not need to reference versions when using Secrets.

For cases where versioning is not required, the manifest entered in the Harness Service Manifests section should be annotated with harness.io/skip-versioning: true.

For example, you might want to skip versioning for an ImagePullSecret because it never changes, or for TLS certs if they are referenced in a Kubernetes container's cmd args.
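
Here is a sketch of a ConfigMap that Harness would not version because of the annotation; the name and data are illustrative only:

apiVersion: v1
kind: ConfigMap
metadata:
  # hypothetical name, for illustration only
  name: harness-example-static-config
  annotations:
    harness.io/skip-versioning: true
data:
  cert-path: /etc/certs/example.pem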

Harness Annotations and Labels

Harness applies labels during Kubernetes deployment that you can use to select objects you defined in your Harness Service Manifests section. Annotations are a way to pass additional metadata for resources to Harness. For a description of Annotations, see Annotations from Kubernetes.

Annotations

The following Annotations can be put on resource specifications in the Harness Service Manifests section.

  • harness.io/skip-versioning (value: true|false) - To exclude versioning of a resource (ConfigMap or Secret).
  • harness.io/direct-apply (value: true|false) - If more than one workload is present in the Service Manifests files, use this annotation to exclude all but one.
  • harness.io/primary-service (value: true|false) - Identifies the primary Kubernetes service in a Blue/Green deployment.
  • harness.io/stage-service (value: true|false) - Identifies the Kubernetes stage service in a Blue/Green deployment.

Labels

The following labels are applied by Harness during deployment.

  • harness.io/release-name (value: release name) - Applied to pods. By default, the value is the infraMappingId.
  • harness.io/track (value: canary|stable) - Applied to pods in a Canary deployment.
  • harness.io/color (value: blue|green) - Applied to pods in a Blue/Green deployment.
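
Once deployed, you can use these labels with standard kubectl label selectors to inspect what Harness created. For example (a sketch, not output from this guide's deployments):

kubectl --kubeconfig=config get pods --selector=harness.io/track=canary
kubectl --kubeconfig=config get pods --selector=harness.io/color=blue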

Ingress Rules

For Ingress Rules, you simply add your Service and Ingress manifests to your Harness Service, and then refer to the Service name in the Ingress manifest. In the following manifests, you can see the Service name my-svc referred to in the Ingress manifest.

Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  ports:
  - name: my-port
    port: 8080
    protocol: TCP
    targetPort: my-container-port
  selector:
    app: my-deployment
  type: ClusterIP

Ingress manifest:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /my/path
        backend:
          serviceName: my-svc
          servicePort: 8080

Using the values.yaml file and Go templating, you would simply add the service name and any other key values to the values.yaml file and then replace them in both manifests with the variable. For examples of using Go templating, see Go Templating and Harness Variables in Configuration Files.
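
For example, a values.yaml entry and the corresponding references might look like the following sketch. The serviceName and servicePort keys are illustrative variable names, not values required by Harness:

# values.yaml
name: harness-example
serviceName: my-svc
servicePort: 8080

# in the Service manifest
metadata:
  name: {{.Values.serviceName}}
spec:
  ports:
  - name: my-port
    port: {{.Values.servicePort}}

# in the Ingress manifest
backend:
  serviceName: {{.Values.serviceName}}
  servicePort: {{.Values.servicePort}}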

