Docker, Kubernetes, and Helm Deployment


This guide walks you through deploying a Docker image to a Kubernetes cluster using a Helm chart. This is a common deployment scenario, and walking through all of the steps will help you set it up in Harness for your own microservices and apps.

For detailed information on using Kubernetes with Harness, see Kubernetes and Harness FAQ.

You can perform all of the steps in this guide using free accounts. You will need a Docker Hub account and a Google Cloud Platform account. Both offer free accounts.

Estimated Time to Complete: 30 minutes.

Intended Audience

If you are entirely new to Harness, please see the Harness Quick Start Setup Guide.

What Are We Going to Do?

This guide walks you through deploying a publicly available Docker image of NGINX to a Google Cloud Platform (GCP) Kubernetes cluster using a publicly available Bitnami Helm chart. Basically, we do the following:

  • Docker - Pull a Docker image of NGINX from Docker Hub.
  • Helm - Use a Bitnami Helm chart for NGINX from the Bitnami GitHub repo to define the Kubernetes service and deployment rules.
  • Kubernetes - Deploy to a GCP Kubernetes cluster that is configured with Helm and Tiller.

Sound fun? Let's get started.

What Are We Not Going to Do?

This is a simple guide that covers the basics of deploying Docker images to Kubernetes using Helm; more advanced scenarios are not covered.

What Harness Needs Before You Begin

The following are required to deploy to Kubernetes using Helm via Harness:

  • An account with a Docker Artifact Server you can connect to Harness, such as Docker Hub.
  • An account with a Kubernetes provider you can connect to Harness, such as Google Cloud Platform.
  • A Kubernetes cluster with Helm and Tiller installed and running in one pod.
  • A Helm chart hosted on a server accessible via anonymous access.
  • A Harness Delegate installed that can connect to your Artifact Server and Cloud Provider.

We will walk you through the process of setting up Harness with connections to the artifact server and cloud provider, specifications for the Kubernetes cluster, commands for setting up Helm and Tiller on your Kubernetes cluster, and provide examples of a working Helm chart template.

Permissions for Connections and Providers

You connect Docker registries and Kubernetes clusters with Harness using the accounts you have with those providers. The following list covers the permissions required for the Docker, Kubernetes, and Helm components.

  • Docker:
    • Read permissions for the Docker repository - The Docker registry you use as an Artifact Server in Harness must have Read permissions for the Docker repository.
    • List images and tags, and pull images - The user account you use to connect the Docker registry must be able to perform the following operations in the registry: List images and tags, and pull images. If you have a Docker Hub account, you can access the NGINX Docker image we use in this guide.
  • Kubernetes Cluster:
    • For Google Cloud Platform (GCP), please see Add Cloud Providers. We use the free GCP account in this guide and will describe the required project permissions.
    • For a cluster or provider such as OpenShift, please see Kubernetes Cluster.
  • Helm:
    • URL for the Helm chart - For this guide, we use a publicly available Helm chart for NGINX from Bitnami, hosted in the Bitnami GitHub repository. You do not need a GitHub account.
    • Helm and Tiller - Helm and Tiller must be installed and running in your Kubernetes cluster. This guide walks you through setting this up.

For a list of all of the permissions and network requirements for connecting Harness to providers, see Connectivity and Permissions Requirements.

Connections and Providers Setup

This section describes how to set up Docker and Kubernetes with Harness, and what the requirements are for using Helm.

Docker Artifact Server

You can add a Docker repository, such as Docker Hub, as an Artifact Server in Harness. Then, when you create a Harness service, you specify the Artifact Server and artifact(s) to use for deployment.

For this guide, we will be using a publicly available Docker image of NGINX, hosted on Docker Hub at hub.docker.com/_/nginx/. You will need to set up or use an existing Docker Hub account to use Docker Hub as a Harness Artifact Server. To set up a free account with Docker Hub, see Docker Hub.

To specify a Docker repository as an Artifact Server, do the following:

  1. In Harness, click Setup.
  2. Click Connectors. The Connectors page appears.
  3. Click Artifact Servers, and then click Add Artifact Server. The Artifact Servers dialog appears.
  4. In Type, click Docker Registry. The dialog changes for the Docker Registry account.
  5. In Docker Registry URL, enter the URL for the Docker Registry (for Docker Hub, https://registry.hub.docker.com/v2/).
  6. Enter a username and password for the provider (for example, your Docker Hub account).
  7. Click SUBMIT. The artifact server is displayed.
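
Optionally, you can sanity-check the credentials locally before adding them to Harness, assuming the Docker CLI is installed on your workstation:

# Log in with the same username/password you will enter in Harness
# (defaults to Docker Hub when no registry is specified).
$ docker login --username <your-docker-hub-username>

# Confirm read access by pulling the image used in this guide.
$ docker pull library/nginx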

Kubernetes Cluster

For a Cloud Provider in Harness, you can specify a Kubernetes-supporting Cloud platform, such as Google Cloud Platform and OpenShift, or your own Kubernetes Cluster, and then define the deployment environment for Harness to use.

For this guide, we will use the Google Cloud Platform Kubernetes Engine. The default configuration for a Kubernetes cluster in GCP will provide you with what you need for this guide. For information on setting up a Kubernetes cluster on GCP, see Creating a Cluster from Google.

For this guide, you can use the free Google Cloud Platform account.

The specs for the Kubernetes cluster you create will depend on the microservices or apps you will deploy to it. To give you guidance on the node specs for the Kubernetes Cluster used in this guide, here is the node pool we created for our Kubernetes cluster:

For Harness deployments, a Kubernetes cluster requires the following:

  • Credentials for the Kubernetes cluster in order to add it as a Cloud Provider. If you set up GCP as a cloud provider using a GCP user account, that account should also be able to configure the Kubernetes cluster on the cloud provider.
  • The kubectl command-line tool must be configured to communicate with your cluster.
  • A kubeconfig file for the cluster. The kubeconfig file configures access to a cluster. It does not need to be named kubeconfig.

For more information, see Accessing Clusters and Configure Access to Multiple Clusters from Kubernetes.
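
For example, on a GCP cluster you can configure kubectl from Cloud Shell or your workstation with commands along these lines (the cluster name and zone are placeholders):

# Fetch credentials for the cluster and write them to your kubeconfig.
$ gcloud container clusters get-credentials <cluster-name> --zone <zone>

# Confirm that kubectl can reach the cluster.
$ kubectl cluster-info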

Set Up a Kubernetes Cloud Provider

To set up a Kubernetes Cloud platform or cluster as a Harness Cloud Provider, do the following:

  1. In Harness, click Setup.
  2. Click Cloud Providers.
  3. Click Add Cloud Provider. The Cloud Provider dialog opens.

    In this example, we will add a GCP cloud provider, but there are several other provider options. In all cases, you will need to provide access keys in order for the delegate to connect to the provider.
  4. In Type, select Google Cloud Platform. Next, you will upload the Google Cloud service account key file. The GCP service account you use must have the Kubernetes Engine Admin and Storage Object Viewer roles assigned to it. Here is what the permissions look like in GCP:

    When you create the service account key, GCP creates a key file in JSON format and downloads it to your computer. This is the file you will upload into Harness. For information on creating a new service account key file, see Creating and Managing Service Account Keys from Google, or see the gcloud sketch after this procedure.
  5. Click Choose File, select your service account key file, and click Open. The Google Cloud Account Name is automatically entered.
  6. Click SUBMIT. The GCP cloud provider is added.
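
If you prefer to create the service account and key from the command line, a sketch with gcloud looks like this (the account name and project ID are placeholders; the roles correspond to Kubernetes Engine Admin and Storage Object Viewer):

# Create a service account for Harness.
$ gcloud iam service-accounts create harness-deployer --display-name "Harness deployer"

# Grant the two required roles.
$ gcloud projects add-iam-policy-binding <project-id> \
  --member serviceAccount:harness-deployer@<project-id>.iam.gserviceaccount.com \
  --role roles/container.admin
$ gcloud projects add-iam-policy-binding <project-id> \
  --member serviceAccount:harness-deployer@<project-id>.iam.gserviceaccount.com \
  --role roles/storage.objectViewer

# Create the JSON key file to upload into Harness.
$ gcloud iam service-accounts keys create harness-key.json \
  --iam-account harness-deployer@<project-id>.iam.gserviceaccount.com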

Helm

There are only two Helm requirements Harness needs to deploy to your Kubernetes cluster:

  • Helm and Tiller installed and running in one pod in your Kubernetes cluster.
  • Helm chart hosted on an accessible server. The server may allow anonymous access.

The Helm client version must match the Tiller version running in the cluster (use helm version to check). If Tiller is not the latest version, upgrade Tiller to the latest version (helm init --upgrade), or match the Helm client version to the Tiller version. You can set the Helm version in the Harness Delegate YAML file using the HELM_DESIRED_VERSION environment property.
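
For example (output abbreviated; your versions will differ):

$ helm version
Client: &version.Version{SemVer:"v2.11.0", ...}
Server: &version.Version{SemVer:"v2.11.0", ...}

# If the client and server (Tiller) versions do not match, upgrade Tiller:
$ helm init --upgrade
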
Set Up Helm on GCP Kubernetes Cluster

Setting up Helm and Tiller on a GCP Kubernetes cluster is a simple process. Log into the Google Cloud Shell for your Kubernetes cluster, and use the following commands to set up Helm.

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

$ kubectl --namespace kube-system create sa tiller

$ kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller

$ helm init --service-account tiller

$ helm repo update

# verify that helm is installed in the cluster
$ kubectl get deploy,svc tiller-deploy -n kube-system

Here is an example of a shell session with the commands and the output:

j_doe@cloudshell:~ (project-121212)$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7230 100 7230 0 0 109k 0 --:--:-- --:--:-- --:--:-- 110k
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.

j_doe@cloudshell:~ (project-121212)$ kubectl --namespace kube-system create sa tiller
serviceaccount "tiller" created

j_doe@cloudshell:~ (project-121212)$ kubectl create clusterrolebinding tiller \
> --clusterrole cluster-admin \
> --serviceaccount=kube-system:tiller

clusterrolebinding.rbac.authorization.k8s.io "tiller" created

j_doe@cloudshell:~ (project-121212)$ helm init --service-account tiller
$HELM_HOME has been configured at /home/john_doe/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

j_doe@cloudshell:~ (project-121212)$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

j_doe@cloudshell:~ (project-121212)$ kubectl get deploy,svc tiller-deploy -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/tiller-deploy 1 1 1 1 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tiller-deploy ClusterIP 10.63.251.235 <none> 44134/TCP 20s

Helm Chart

In this guide, we will be using a simple Helm chart template for NGINX created by Bitnami. The Helm chart sets up Kubernetes for a Docker image of NGINX. There are three main files in the Helm chart template:

  • svc.yaml - Defines the manifest for creating a service endpoint for your deployment.
  • deployment.yaml - Defines the manifest for creating a Kubernetes deployment.
  • vhost.yaml - ConfigMap used to store non-confidential data in key-value pairs.

The Helm chart is pulled from the Bitnami GitHub repository. You can view all of the chart files there, but the key templates are included below.
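
If you want to inspect the chart locally before using it in Harness (assuming you have the Helm client installed), you can fetch it from the Bitnami repo:

# Add the Bitnami repo and download the chart version used in this guide.
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm fetch bitnami/nginx --version 1.0.1 --untar

# The chart templates, including svc.yaml and deployment.yaml, are now under ./nginx/templates.
$ ls nginx/templates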

svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  type: {{ .Values.serviceType }}
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: {{ template "fullname" . }}

deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  selector:
    matchLabels:
      app: {{ template "fullname" . }}
      release: "{{ .Release.Name }}"
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        release: "{{ .Release.Name }}"
        heritage: "{{ .Release.Service }}"
    spec:
      {{- if .Values.image.pullSecrets }}
      imagePullSecrets:
      {{- range .Values.image.pullSecrets }}
        - name: {{ . }}
      {{- end }}
      {{- end }}
      containers:
      - name: {{ template "fullname" . }}
        image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        ports:
        - name: http
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 30
          timeoutSeconds: 5
          failureThreshold: 6
        readinessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 5
        volumeMounts:
        {{- if .Values.vhost }}
        - name: nginx-vhost
          mountPath: /opt/bitnami/nginx/conf/vhosts
        {{- end }}
      volumes:
      {{- if .Values.vhost }}
      - name: nginx-vhost
        configMap:
          name: {{ template "fullname" . }}
      {{- end }}

Add Docker Image Service with Kubernetes and Helm Chart

The following procedure creates a Harness application and adds a service that uses a Docker image and Helm chart for a Kubernetes deployment.

An application in Harness represents a logical group of one or more entities, including Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD. For more information, see Application Checklist.

To add the Harness application and service, do the following:

Create the Harness Application

  1. In Harness, click Setup.
  2. Click Add Application. The Application dialog appears.
  3. Give your application a name that describes your microservice or app. For the purposes of this guide, we use the name Docker-Helm.
  4. Click SUBMIT. The new application is added.
  5. Click the application name to open the application. The application entities are displayed.

Add the Docker Image Service to the Application

  1. In your new application, click Services. The Services page appears.
  2. In the Services page, click Add Service. The Service dialog appears.
  3. In Name, enter a name for your microservice. For example, if your microservice were the checkout service in a Web application, you could name it checkout. For this guide, we will use Docker-Helm.
  4. In Artifact Type, select Docker Image. Harness will create a service that is designed for Docker images and Kubernetes deployments.
  5. Click SUBMIT. The new service is added. Let's look at where Docker, Kubernetes, and Helm are configured in the Service page:


The following describes the different sections.

  • Docker - Artifact Source: You add a pointer to the Docker image location you want to deploy.
  • Kubernetes - Kubernetes Container Specification: You do not use the Kubernetes Container Specification for a Helm deployment, because the Helm chart defines the container; Harness obtains all of the details, such as the controller type, from the chart. In a Kubernetes deployment without Helm, you enter the Kubernetes specification in Kubernetes Container Specification, either using the Container Specification dialog or by pasting in the entire YAML for your Kubernetes cluster.
  • Helm - Helm Chart Specification: You enter the Helm chart repo and chart to use. Typically, this is the only Helm configuration needed in a Harness service. This is the easiest way to point to your chart, but you can add the chart info in Helm Values YAML instead.
  • Helm - Helm Values YAML: You can enter the contents of a Helm values.yaml file here. This file contains the default values for a chart. If you want to point to your Helm chart here instead, you can simply add the YAML:

harness:
  helm:
    chart:
      name: nginx
      version: 1.0.1
      url: https://charts.bitnami.com/bitnami

Add the Docker Artifact Source

  1. In the new service, click Add Artifact Source, and select Docker Registry. There are a number of artifact sources you can use. For more information, see Add a Docker Image Server. The Docker Registry dialog appears.
  2. In Name, let Harness generate a name for the source.
  3. In Source Server, select the Artifact Server you added earlier in this guide. We are using Docker Hub in this guide.
  4. In Docker Image Name, enter the image name. Official images in public repos such as Docker Hub need the library/ prefix, for example, library/nginx. For this guide, we will use Docker Hub and the publicly available NGINX image at library/nginx.
  5. Click SUBMIT. The Artifact Source is added.

Add the Helm Chart

As explained earlier, you have two options for entering the Helm chart info: the Helm Chart Specifications or the Helm Values YAML. For this guide, we will use the Helm Chart Specifications.

  1. Expand the Helm section of the service.
  2. Click Chart Specifications. The Helm Chart Specifications dialog appears.
  3. In Chart URL, enter the URL for the server hosting the Helm chart. For this guide, we will use https://charts.bitnami.com/bitnami. If you want to use a different method of installing a chart, see the table below this procedure.
  4. In Chart Name, enter the name of the chart in that repo. For this guide, we use nginx.
  5. In Chart Version, enter the chart version to use. This is found in the version label of the chart's Chart.yaml file. For this guide, we will use 1.0.1. If you leave this field empty, Harness gets the latest chart. (An optional command-line check of available versions follows this procedure.)
  6. Click SUBMIT. The chart specification is added to the service.
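
If you added the Bitnami repo to a local Helm client earlier, an optional way to confirm the chart name and version before submitting is:

# List the available versions of the chart (Helm 2 syntax).
$ helm search bitnami/nginx --versions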

There are five different ways a chart can be installed. The following describes how to fill out the Helm Chart Specifications dialog for each method. N/A indicates that you leave the field empty.

  • Chart reference and repo URL. This is the most common method.
    Example: helm install --repo https://example.com/charts/ nginx
    Chart URL: https://example.com/charts/
    Chart Name: nginx
  • Chart reference.
    Example: helm install stable/nginx
    Chart URL: N/A
    Chart Name: stable/nginx
  • Path to a packaged chart. In this method, the chart file is located on the same pod as the Harness delegate. You can add a Delegate Profile that copies the chart from a repo to the pod. For more information, see Delegate Profiles.
    Example: helm install ./nginx-1.2.3.tgz
    Chart URL: N/A
    Chart Name: dir_path_to_delegate/nginx
  • Path to an unpacked chart directory. In this method, the chart directory is located on the same pod as the Harness delegate. You can add a Delegate Profile that copies the chart from a repo to the pod. For more information, see Delegate Profiles.
    Example: helm install ./nginx
    Chart URL: N/A
    Chart Name: dir_path_to_delegate/nginx
  • Absolute URL.
    Example: helm install https://example.com/charts/nginx-1.2.3.tgz
    Chart URL: N/A
    Chart Name: https://example.com/charts/nginx-1.2.3.tgz

For Helm, that's it. You don't have to do any further configuration to the service. Harness will use the chart you specified to configure the Kubernetes cluster.

Now you can define the deployment environment and workflow for the deployment.

Before we do that, let's look at the Helm Values YAML section, where you can enter the YAML from your values.yaml file. The values.yaml file contains the default values for a chart. You will typically keep this file in the repo for your chart, but you can add it in the Harness service instead.

The Helm Values YAML dialog supports the following placeholders, which save you from having to enter some values manually (a sketch follows this list):

  • ${NAMESPACE} - Replaced with the Kubernetes namespace, such as default. You will specify the namespace of the cluster when adding the Harness environment service infrastructure later, and the placeholder will be replaced with that namespace.
  • ${DOCKER_IMAGE_NAME} - Replaced with the Docker image name.
  • ${DOCKER_IMAGE_TAG} - Replaced with the Docker image tag.
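
For illustration, here is a minimal sketch of how these placeholders might appear in Helm Values YAML, assuming the image.* keys used in the Bitnami chart's deployment.yaml shown earlier (the exact keys depend on your chart):

image:
  registry: docker.io
  repository: ${DOCKER_IMAGE_NAME}
  tag: ${DOCKER_IMAGE_TAG}

At deployment time, Harness replaces the image placeholders with the artifact details from your Artifact Source, and ${NAMESPACE} with the namespace you select in the service infrastructure.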

For information about how values from different sources are compiled at runtime, see Helm Values Priority.

Define Deployment Environment

After you have set up the Harness service for your Helm deployment, you can add a Harness environment that lists the Cloud Provider and Kubernetes cluster where Harness will deploy your Docker image. You will define the service infrastructure for Harness to use when deploying the Docker image.

The following procedure creates a Harness environment where you can deploy your Docker image. For this guide, we will be using a Kubernetes cluster on Google Cloud Platform for the deployment environment.

To add a Harness environment, do the following:

Create a New Harness Environment

  1. In your Harness application, click Environments. The Environments page appears.
  2. Click Add Environment. The Environment dialog appears.
  3. In Name, enter a name that describes the deployment environment, for example, GCP-K8S-Helm.
  4. In Environment Type, select Non-Production.
  5. Click SUBMIT. The new Environment page appears.

Add a Service Infrastructure

You define the Kubernetes cluster to use for deployment as a Service Infrastructure. For this guide, we will use the GCP Cloud Provider you added and the Kubernetes cluster with Helm installed.

To add a service infrastructure, do the following:

  1. In the Harness environment, click Add Service Infrastructure. The Service Infrastructure dialog appears.
  2. In Service, select the Harness service you created earlier for your Docker image.
  3. In Deployment Type, select Helm.
  4. In Cloud Provider, select the GCP Cloud Provider you added earlier. The dialog will look something like this:
  5. Click Next. The Configuration section is automatically populated with the clusters located using the Cloud Provider connection. If the Cluster Name drop-down takes a long time to load, check the connectivity between the host running the Harness delegate and the Cloud Provider.
  6. In Cluster Name, select the cluster you created for this deployment.
  7. In Namespace, select the namespace of the Kubernetes cluster. Typically, this is default. If you used the Helm Values YAML placeholder ${NAMESPACE}, the namespace you select here will replace that placeholder.

    When you are finished, the dialog will look something like this:
  8. Click SUBMIT. The new service infrastructure is added to the Harness environment.

That is all you have to do to set up the deployment environment in Harness. You have the Docker image service and deployment environment set up. Now you can create the deployment workflow in Harness.

Override Service Helm Values

In the Harness environment, you can override Helm values specified in the Harness service. This can ensure that the Helm values used in the environment are consistent even if multiple services with different Helm values use that environment.

To override a service Helm value, do the following:

  1. In the Harness environment, click Add Configuration Overrides. The Service Configuration Override dialog appears.
  2. In Service, click the name of the service you want to override, or click All Services. The dialog changes to provide Override Type options.
  3. Click Helm Values YAML. A text area appears where you can paste an entire values.yaml file or simply override one or more values, such as a Docker image name. Click SUBMIT when you are finished.

For more information, see Helm Values Priority.

Workflows with Helm

Once you have added the Harness service and environment, you can add a Harness workflow. Workflows manage how your Harness service is deployed, verified, and rolled back, among other important phases. There are many different types of workflows, such as Basic, Canary, and Blue/Green.

Helm deployments use a Basic workflow that simply installs or upgrades the Docker image on the Kubernetes cluster using the Helm chart.

For more information about workflows, see Add a Workflow, Canary Deployment, and Blue/Green Deployment.

To add the workflow, do the following:

Create the Workflow

  1. In your Harness application, click Workflows.
  2. On the Workflows page, click Add Workflow. The Workflow dialog appears.
  3. In Name, give your workflow a name that describes its purpose, such as NGINX-K8s-Helm.
  4. In Workflow Type, select Basic Deployment. Helm deployments are Basic deployments, unlike Canary or Blue/Green: they are single-phase deployments where each run installs or upgrades the release. You can create multiple Helm deployments and add them to a Harness pipeline. For more information, see Add a Pipeline.
  5. In Environment, select the environment you created earlier in this guide.
  6. In Service, select the service you added earlier in this guide.
  7. In Service Infrastructure, select the service infrastructure you added to your environment earlier in this guide. When you are done, the dialog will look something like this:
  8. Click SUBMIT. The workflow is displayed.

Harness creates all the steps needed to deploy the service to the service infrastructure.

You can see that one workflow step, Helm Deploy, is incomplete.

Steps are marked incomplete if they need additional input from you. To complete this step, see the following section.

Helm Deploy Step

In your workflow, click the Helm Deploy step. The Helm Deploy dialog appears.

The Helm Deploy step has a few options you can use to manage how Helm is used in the deployment.

Helm Release Name

In the Helm Deploy Step, you need to add a Helm release name.

From the Helm docs:

A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster. And each time it is installed, a new release is created. Consider a MySQL chart. If you want two databases running in your cluster, you can install that chart twice. Each one will have its own release, which will in turn have its own release name.

Since Helm requires release names to be unique across the cluster, Harness generates a unique identifier with the variable ${infra.helm.shortId}. You can use this variable as a prefix or suffix for the release name. We recommend the following release name:

${service.name}-${env.name}-${infra.helm.shortId}

If the service name is NGINX and the environment name is GCP-K8s-Helm, then the release name will be nginx-gcp-k8s-helm-rjogwmu, where rjogwmu is generated by ${infra.helm.shortId}.

Deployment Steady State Timeout

For Deployment steady state timeout, you can leave the default of 10 minutes. It is unlikely that deployment would ever take longer than 10 minutes.

Git Connector

For Git connector, you can specify individual values or a full values.yaml file in a Git repo, and Harness will fetch the values at runtime. For information on how Harness merges values from different sources for the values.yaml, see Helm Values Priority.

To use a Git connector, you need to add a Git repo as a Harness Source Repo provider. For more information, see Add Source Repo Providers.

To use a Git connector in the Helm Deploy step, do the following:

  1. In Git connector, select the Git repo you added as a Source Repo.
  2. Select either Use specific commit ID and enter the commit ID, or select Use latest commit from branch and enter the branch name.
  3. In File path, enter the path to the values.yaml file in the repo, including the repo name, like helm/values.yaml.

Here's an example of what the Git connector might look like:

Completed Helm Deploy Step

When you are done, the typical Helm Deploy dialog will look something like this:

Only the Helm release name is required.

Click SUBMIT and your workflow is complete. You can view or modify the default rollback steps and other deployment strategies in the workflow (for more information, see Add a Workflow), but for this guide, the workflow is complete and you can now deploy it. See the next section for deployment steps.

Helm Deployments

The following procedure deploys the workflow you created in this guide.

Before deploying the workflow, ensure all Harness delegates that can reach the resources used in the workflow are running. In Harness, click Setup, and then click Harness Installations.

To deploy your workflow, do the following:

  1. In your workflow, click Deploy. The Start New Deployment dialog appears.
  2. In Notes, enter information about the deployment that others should know. Harness records and maintains all of the important details of each deployment, but you might want to share additional context with your team.
  3. Click SUBMIT. The Deployments page appears, and displays the deployment in real time.

The deployment was successful! Now let's look further at the Helm deployment.

Click Phase 1 to expand it and see the details of the phase, including the workflow entities and the Deploy Containers step. Expand Deploy Containers and click the Helm Deploy step you set up in the workflow. The details for the step are displayed, along with the command output:

Viewing Deployment in the Log

Let's look through the deployment log and see how your Docker image was deployed to your cluster using Helm.

First, we check to see if the Helm chart repo has already been added and, if not, add it from https://charts.bitnami.com/bitnami.

INFO   2018-10-09 16:59:51    Adding helm repository https://charts.bitnami.com/bitnami
INFO 2018-10-09 16:59:51 Checking if the repository has already been added
INFO 2018-10-09 16:59:51 Repository not found
INFO 2018-10-09 16:59:51 Adding repository https://charts.bitnami.com/bitnami with name examplefordoc-nginx
INFO 2018-10-09 16:59:51 Successfully added repository https://charts.bitnami.com/bitnami with name examplefordoc-nginx

Next, we look to see if a release with the same release name exists:

INFO   2018-10-09 16:59:51    Installing
INFO 2018-10-09 16:59:51 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-09 16:59:51 Release: "nginx-gcp-k8s-helm-rjogwmu" not found

This is the release name generated from the release name we set in the Helm Deploy step, ${service.name}-${env.name}-${infra.helm.shortId}.

Since this is the first deployment, an existing release with that name is not found, and a new release occurs.

INFO   2018-10-09 16:59:52    No previous deployment found for release. Installing chart
INFO 2018-10-09 16:59:54 NAME: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-09 16:59:54 LAST DEPLOYED: Tue Oct 9 23:59:53 2018
INFO 2018-10-09 16:59:54 NAMESPACE: default
INFO 2018-10-09 16:59:54 STATUS: DEPLOYED

You can see the Kubernetes events in the logs as the objects for the release are created.

INFO   2018-10-09 16:59:54    NAME                                       READY  STATUS             RESTARTS  AGE
INFO 2018-10-09 16:59:54 nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs 0/1 ContainerCreating 0 0s
INFO 2018-10-09 16:59:54
INFO 2018-10-09 16:59:54 Deployed Controllers [2]:
INFO 2018-10-09 16:59:54 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1)
INFO 2018-10-09 16:59:54 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1)
INFO 2018-10-09 16:59:54
INFO 2018-10-09 16:59:54 **** Kubernetes Controller Events ****
...
INFO 2018-10-09 16:59:54 Desired number of pods reached [1/1]
...
INFO 2018-10-09 16:59:54 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1]
INFO 2018-10-09 16:59:54 Waiting for pods to be running [0/1]
INFO 2018-10-09 17:00:05
...
INFO 2018-10-09 17:00:05 **** Kubernetes Pod Events ****
INFO 2018-10-09 17:00:05 Pod: nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs
INFO 2018-10-09 17:00:05 - pulling image "docker.io/bitnami/nginx:1.14.0-debian-9"
INFO 2018-10-09 17:00:05 - Successfully pulled image "docker.io/bitnami/nginx:1.14.0-debian-9"
INFO 2018-10-09 17:00:05 - Created container
INFO 2018-10-09 17:00:05 - Started container
INFO 2018-10-09 17:00:05
INFO 2018-10-09 17:00:05 Pods are running [1/1]
INFO 2018-10-09 17:00:05 Waiting for pods to reach steady state [0/1]

Lastly, and most importantly, Harness confirms that the pods reach steady state, ensuring the deployment was successful.

INFO   2018-10-09 17:00:20    Pods have reached steady state [1/1]
INFO 2018-10-09 17:00:20 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. Pod IP: 10.60.1.8
INFO 2018-10-09 17:00:20 Command finished with status SUCCESS

Helm Rollbacks

Harness adds a revision number for each deployment. If a new deployment fails, Harness rolls back to the previous revision. You can see the revision number in the deployment log. Here is a sample from a log after an upgrade:

INFO   2018-10-11 14:43:09    Installing
INFO 2018-10-11 14:43:09 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-11 14:43:09 REVISION UPDATED STATUS CHART DESCRIPTION
INFO 2018-10-11 14:43:09 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete
INFO 2018-10-11 14:43:09 2 Thu Oct 11 18:27:35 2018 SUPERSEDED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 14:43:09 3 Thu Oct 11 21:30:24 2018 DEPLOYED nginx-1.0.1 Upgrade complete

The REVISION column lists the revision number. Note that revision 3 is the last successfully deployed version. We will now fail a deployment, which would be revision 4, and you will see Harness roll back to revision 3.

Here is an example where a failure has been initiated using an erroneous HTTP call (Response Code 500) to demonstrate the rollback behavior:

To experiment with rollbacks, you can simply add a step to your workflow that will fail.

The failed deployment section is red, but the Rollback Phase 1 step is green, indicating that rollback has been successful. If we expand Rollback Phase 1, we can see the rollback information in the Helm Rollback step details:

The failed version is Release Old Version 4 and the Release rollback Version is revision 3, the last successful version. The rollback version now becomes the new version, Release New Version 5.

Let's look at the log of the rollback to see Harness rolling back successfully.

INFO   2018-10-11 14:43:22    Rolling back
INFO 2018-10-11 14:43:23 Rollback was a success! Happy Helming!
INFO 2018-10-11 14:43:23
INFO 2018-10-11 14:43:24 Deployed Controllers [2]:
INFO 2018-10-11 14:43:24 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1)
INFO 2018-10-11 14:43:24 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1)
INFO 2018-10-11 14:43:26 Desired number of pods reached [1/1]
INFO 2018-10-11 14:43:26 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1]
INFO 2018-10-11 14:43:26 Pods are running [1/1]
INFO 2018-10-11 14:43:26 Pods have reached steady state [1/1]
INFO 2018-10-11 14:43:28 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. Pod IP: 10.60.1.8
INFO 2018-10-11 14:43:28 Command finished with status SUCCESS

When the next deployment is successful, you can see a record of the rollback release:

INFO   2018-10-11 15:38:16    Installing
INFO 2018-10-11 15:38:16 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-11 15:38:16 REVISION UPDATED STATUS CHART DESCRIPTION
INFO 2018-10-11 15:38:16 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete
INFO 2018-10-11 15:38:16 2 Thu Oct 11 18:27:35 2018 SUPERSEDED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 15:38:16 3 Thu Oct 11 21:30:24 2018 SUPERSEDED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 15:38:16 4 Thu Oct 11 21:43:12 2018 SUPERSEDED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 15:38:16 5 Thu Oct 11 21:43:22 2018 DEPLOYED nginx-1.0.1 Rollback to 3

The Description for the last release, Revision 5, states that it was a Rollback to 3.

Upgrading Deployments

When you run a Helm deployment a second time, it upgrades the release in your Kubernetes cluster. The upgrade is performed in a rolling fashion that does not cause downtime: old pods are gracefully deleted and new pods are added with the new version of the artifact.
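
If you want to watch the rolling update from the cluster side while the upgrade runs, you can follow the Deployment's rollout (the Deployment name below is taken from the logs in this guide; substitute your own release name):

$ kubectl rollout status deployment/nginx-gcp-k8s-helm-rjogw
$ kubectl get pods -w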

Let's look at the deployment log from an upgrade to see how Harness handles it.

First, Harness looks for all existing Helm chart releases with the same name and upgrades them:

INFO   2018-10-11 14:30:22    Installing
INFO 2018-10-11 14:30:22 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-11 14:30:24 REVISION UPDATED STATUS CHART DESCRIPTION
INFO 2018-10-11 14:30:24 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete
INFO 2018-10-11 14:30:24 2 Thu Oct 11 18:27:35 2018 DEPLOYED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 14:30:24
INFO 2018-10-11 14:30:24 Previous release exists for chart. Upgrading chart
INFO 2018-10-11 14:30:25 Release "nginx-gcp-k8s-helm-rjogwmu" has been upgraded. Happy Helming!
INFO 2018-10-11 14:30:25 LAST DEPLOYED: Thu Oct 11 21:30:24 2018
INFO 2018-10-11 14:30:25 NAMESPACE: default
INFO 2018-10-11 14:30:25 STATUS: DEPLOYED

Then it upgrades the cluster pods with the new Docker image of NGINX:

INFO   2018-10-11 14:30:25    Deployed Controllers [2]:
INFO 2018-10-11 14:30:25 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1)
INFO 2018-10-11 14:30:25 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1)
INFO 2018-10-11 14:30:25 Desired number of pods reached [1/1]
INFO 2018-10-11 14:30:25 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1]
INFO 2018-10-11 14:30:25 Pods are running [1/1]
INFO 2018-10-11 14:30:25 Pods have reached steady state [1/1]
INFO 2018-10-11 14:30:27 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. Pod IP: 10.60.1.8
INFO 2018-10-11 14:30:27 Command finished with status SUCCESS

Helm Values Priority

Typically, the values.yaml applied to your Kubernetes cluster is a single file from the Helm chart repo, but it can also be a merger of values from different sources. This enables a key in values.yaml to get updated with different, and likely more current, values.

You can simply use a values.yaml in the Helm chart repo. There is no requirement to use multiple sources.

Values for the values.yaml can be specified in the following sources:

  • Harness service.
  • Harness environment.
  • Harness workflow via a Git connector.
  • The values.yaml file in the Helm chart repo.

Harness will merge key values from all of these sources into a values.yaml to apply to Kubernetes.

In case of conflicts, values are overridden in the following order, from lowest to highest priority (a hypothetical example follows the list):

  1. Chart repo values.yaml has the least priority. This is the values.yaml in the chart repo you specify in the Helm Chart Specifications in the Harness service.
  2. Harness service values override chart repo values. These are the values specified in the Helm Values YAML in the Harness service.
  3. Harness environment values override Harness service values. These are the values you specify using Add Configuration Overrides in a Harness environment.
  4. Harness workflow values added via a Git connector have the highest priority.
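
Here is a hypothetical illustration of the merge order using the chart's image.pullPolicy key (the values are examples only):

# Chart repo values.yaml (lowest priority)
image:
  pullPolicy: IfNotPresent

# Harness service Helm Values YAML (overrides the chart repo)
image:
  pullPolicy: Always

# Harness environment Service Configuration Override (overrides the service)
image:
  pullPolicy: IfNotPresent

# Result applied to the cluster: image.pullPolicy is IfNotPresent, unless a
# values.yaml fetched through the workflow Git connector also sets the key,
# in which case that value wins.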

Do it All in YAML

All of the Harness configuration steps in this guide can be performed using code instead of the Harness user interface. You can view or edit the YAML for any Harness configuration by clicking the </> button on any page.

When you click the button, the Harness code editor appears:

You can edit YAML and click Save to change the configuration.

For example, here is the YAML for the workflow we set up in this guide.

harnessApiVersion: '1.0'
type: BASIC
envName: GCP-K8s-Helm
failureStrategies:
- executionScope: WORKFLOW
  failureTypes:
  - APPLICATION_ERROR
  repairActionCode: ROLLBACK_WORKFLOW
  retryCount: 0
notificationRules:
- conditions:
  - FAILED
  executionScope: WORKFLOW
  notificationGroups:
  - Default Notification Group
phases:
- type: HELM
  computeProviderName: harness-exploration
  daemonSet: false
  infraMappingName: us-central1-a_doc-helm -GCP_Kubernetes--Google Cloud Platform- harness-exploration- default
  name: Phase 1
  phaseSteps:
  - type: HELM_DEPLOY
    name: Deploy Containers
    steps:
    - type: HELM_DEPLOY
      name: Helm Deploy
      properties:
        steadyStateTimeout: 10
        gitFileConfig:
          connectorId: null
          commitId: null
          branch: null
          useBranch: false
        helmReleaseNamePrefix: ${service.name}-${env.name}-${infra.helm.shortId}
    stepsInParallel: false
  - type: VERIFY_SERVICE
    name: Verify Service
    stepsInParallel: false
  - type: WRAP_UP
    name: Wrap Up
    stepsInParallel: false
  provisionNodes: false
  serviceName: NGINX
  statefulSet: false
rollbackPhases:
- type: HELM
  computeProviderName: harness-exploration
  daemonSet: false
  infraMappingName: us-central1-a_doc-helm -GCP_Kubernetes--Google Cloud Platform- harness-exploration- default
  name: Rollback Phase 1
  phaseNameForRollback: Phase 1
  phaseSteps:
  - type: HELM_DEPLOY
    name: Deploy Containers
    phaseStepNameForRollback: Deploy Containers
    statusForRollback: SUCCESS
    steps:
    - type: HELM_ROLLBACK
      name: Helm Rollback
    stepsInParallel: false
  - type: VERIFY_SERVICE
    name: Verify Service
    phaseStepNameForRollback: Deploy Containers
    statusForRollback: SUCCESS
    stepsInParallel: false
  - type: WRAP_UP
    name: Wrap Up
    stepsInParallel: false
  provisionNodes: false
  serviceName: NGINX
  statefulSet: false
templatized: false

Helm and the Kubernetes Delegate

You can set the Helm version for the Harness Kubernetes delegate to use.

The Harness Kubernetes delegate is configured and run using a YAML file that you download from Harness, as described in Delegate Installation. You can edit the YAML file and set the desired Helm version to use with the HELM_DESIRED_VERSION parameter.

Here is a sample of the Kubernetes delegate YAML file; the HELM_DESIRED_VERSION environment variable appears near the end:

apiVersion: v1
kind: Namespace
metadata:
  name: harness-delegate
...
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
...
spec:
  ...
  template:
    metadata:
      labels:
        harness.io/app: harness-delegate
        harness.io/account: yvcrcl
        harness.io/name: harness-delegate
    spec:
      containers:
      - image: harness/delegate:latest
        imagePullPolicy: Always
        name: harness-delegate-instance
        resources:
          limits:
            cpu: "1"
            memory: "6Gi"
        env:
          ...
          - name: HELM_DESIRED_VERSION
            value: ""
      restartPolicy: Always
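
For example, to pin the delegate to a specific Helm client version, you could set a value such as the following (the version string is only an example; match the Tiller version running in your cluster):

          - name: HELM_DESIRED_VERSION
            value: "v2.11.0"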

Next Steps

  • Pipeline and Triggers - Once you have a successful workflow, you can experiment with a Harness pipeline, which is a collection of one or more workflows, and Harness triggers, which enable you to execute a workflow or pipeline deployment using different criteria, such as when a new artifact is added to an artifact source. For more information, see Add a Pipeline and Add a Trigger.
  • Continuous Verification - Add verification steps using Splunk, Sumo Logic, ELK, AppDynamics, New Relic, Dynatrace, and others to your workflow. For more information, see Continuous Verification.

