Helm Deployments


This document covers Harness Helm implementation. For Kubernetes implementation, see Kubernetes Deployments Version 2.

This guide walks you through deploying a Docker image to a Kubernetes cluster using a Helm chart. This is a very common deployment scenario, and walking through all of the steps involved will help you set it up in Harness for your own microservices and apps.

You can perform all of the steps in this guide using free accounts: you will need a Docker Hub account and a Google Cloud Platform account, both of which offer free tiers.

Estimated Time to Complete: 30 minutes.

Intended Audience

If you are entirely new to Harness, please see the Quick Start Setup Guide.

What Are We Going to Do?

This guide walks you through deploying a publicly available Docker image of NGINX to a Google Cloud Platform (GCP) Kubernetes cluster using a publicly available Bitnami Helm chart. Basically, we do the following:

  • Docker - Pull a Docker image of NGINX from Docker Hub.
  • Helm - Use a Bitnami Helm chart for NGINX from their GitHub repo and define the Kubernetes service and deployment rules.
  • Kubernetes - Deploy to a GCP Kubernetes cluster that is configured with Helm and Tiller.

Sound fun? Let's get started.

What Are We Not Going to Do?

This is a simple guide that covers the basics of deploying Docker images to Kubernetes using Helm. It does not cover the following:

What Harness Needs Before You Begin

The following are required to deploy to Kubernetes using Helm via Harness:

  • An account with a Docker Artifact Server you can connect to Harness, such as Docker Hub.
  • An account with a Kubernetes provider you can connect to Harness, such as Google Cloud Platform.
  • A Kubernetes cluster with Helm and Tiller installed and running in one pod.
  • A Helm chart hosted on a server accessible with anonymous access.
  • A Harness Delegate installed that can connect to your Artifact Server and Cloud Provider.

We will walk you through the process of setting up Harness with connections to the artifact server and cloud provider, specifications for the Kubernetes cluster, commands for setting up Helm and Tiller on your Kubernetes cluster, and provide examples of a working Helm chart template.

Permissions for Connections and Providers

You connect Docker registries and Kubernetes clusters with Harness using the accounts you have with those providers. The following list covers the permissions required for the Docker, Kubernetes, and Helm components.

  • Docker:
    • Read permissions for the Docker repository - The Docker registry you use as an Artifact Server in Harness must have Read permissions for the Docker repository.
    • List images and tags, and pull images - The user account you use to connect the Docker registry must be able to perform the following operations in the registry: List images and tags, and pull images. If you have a Docker Hub account, you can access the NGINX Docker image we use in this guide.
  • Kubernetes Cluster:
  • Helm:
    • URL for the Helm chart - For this guide, we use a publicly available Helm chart for NGINX from Bitnami, hosted on their GitHub account. You do not need a GitHub account.
    • Helm and Tiller - Helm and Tiller must be installed and running in your Kubernetes cluster. Steps for setting this up are listed below.

For a list of all of the permissions and network requirements for connecting Harness to providers, see Delegate Connection Requirements.
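Optionally, if Docker is installed locally, you can confirm that the image used in this guide is publicly pullable before configuring anything in Harness (this check is not part of the Harness setup):

# The NGINX image used in this guide is public, so no special permissions are needed to pull it
docker pull library/nginx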

Harness Kubernetes Delegate

The Harness Kubernetes Delegate runs in your target deployment cluster and executes all deployment steps, such as artifact collection and kubectl commands. The Delegate makes outbound connections to the Harness Manager only.

You can install and run the Harness Kubernetes Delegate in any Kubernetes environment, but the permissions needed for connecting Harness to that environment will be different for each environment.

The simplest method is to install the Harness Delegate in your Kubernetes cluster and then set up the Harness Cloud Provider to use the same credentials as the Delegate.

For information on how to install the Delegate in a Kubernetes cluster, see Kubernetes Cluster. For an example installation of the Delegate in a Kubernetes cluster in a Cloud Platform, see Installation Example: Google Cloud Platform.

Here is a quick summary of the steps for installing the Harness Delegate in your Kubernetes cluster:

  1. In Harness, click Setup, and then click Harness Installations.
  2. Click Download Delegate and then click Kubernetes YAML.

  3. In the Delegate Setup dialog, enter a name for the Delegate, such as doc-example, and click SUBMIT. The YAML file is downloaded to your machine.

  4. Install the Delegate in your cluster. You can copy the YAML file to your cluster any way you choose, but the following steps describe a common method.
    1. In a Terminal, connect to the Kubernetes cluster, and then use the same terminal to navigate to the folder where you downloaded the Harness Delegate YAML file. For example, cd ~/Downloads.
    2. Extract the YAML file: tar -zxvf harness-delegate-kubernetes.tar.gz.
    3. Navigate to the harness-delegate folder that was created:

      cd harness-delegate-kubernetes
    4. Paste the following installation command into the Terminal and press enter:

      kubectl apply -f harness-delegate.yaml

      You will see the following output (this Delegate is named doc-example):

      namespace/harness-delegate created

      clusterrolebinding.rbac.authorization.k8s.io/harness-delegate-cluster-admin created

      statefulset.apps/doc-example-lnfzrf created
    5. Run this command to verify that the Delegate pod was created:

      kubectl get pods -n harness-delegate

      You will see output with the status Pending. The Pending status simply means that the cluster is still loading the pod.
    6. Wait a few moments for the cluster to finish loading the pod and for the Delegate to connect to Harness Manager. In Harness Manager, on the Harness Installations page, the new Delegate will appear. You can refresh the page if you like.

Connections and Providers Setup

This section describes how to set up Docker and Kubernetes with Harness, and what the requirements are for using Helm.

Docker Artifact Server

You can add a Docker repository, such as Docker Hub, as an Artifact Server in Harness. Then, when you create a Harness service, you specify the Artifact Server and artifact(s) to use for deployment.

For this guide, we will be using a publicly available Docker image of NGINX, hosted on Docker Hub at hub.docker.com/_/nginx/. You will need to set up or use an existing Docker Hub account to use Docker Hub as a Harness Artifact Server. To set up a free account with Docker Hub, see Docker Hub.

To specify a Docker repository as an Artifact Server, do the following:

  1. In Harness, click Setup.
  2. Click Connectors. The Connectors page appears.
  3. Click Artifact Servers, and then click Add Artifact Server. The Artifact Servers dialog appears.
  4. In Type, click Docker Registry. The dialog changes for the Docker Registry account.
  5. In Docker Registry URL, enter the URL for the Docker Registry (for Docker Hub, https://registry.hub.docker.com/v2/).
  6. Enter a username and password for the provider (for example, your Docker Hub account).
  7. Click SUBMIT. The artifact server is displayed.
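Optionally, you can sanity-check that the registry URL is reachable from the host running the Delegate. An unauthenticated request to Docker Hub's v2 endpoint returns a 401, which still confirms connectivity (this is only a troubleshooting aid, not part of the Harness setup):

curl -sI https://registry.hub.docker.com/v2/ | head -n 1
# Expected output similar to: HTTP/1.1 401 Unauthorized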

Single GCR Docker Registry across Multiple Projects

In this document, we perform a simple setup using Docker Registry. Another common artifact server for Kubernetes deployments is GCR (Google Container Registry), which is also supported by Harness.

An important note about using GCR is that if your GCR and target GKE Kubernetes cluster are in different GCP projects, Kubernetes might not have permission to pull images from the GCR project. For information on using a single GCR Docker registry across projects, see Using single Docker repository with multiple GKE projects from Medium and the Granting users and other projects access to a registry section from Configuring access control by Google.
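As a rough sketch of what the linked Google documentation describes (the service account and project names below are placeholders; consult those articles for the authoritative steps), granting the GKE nodes' service account read access to the GCR project's underlying Cloud Storage bucket typically looks like this:

# Illustrative only: allow the GKE cluster's service account to read images
# stored in another project's GCR bucket
gsutil iam ch \
  serviceAccount:[GKE-SERVICE-ACCOUNT]@[GKE-PROJECT-ID].iam.gserviceaccount.com:objectViewer \
  gs://artifacts.[GCR-PROJECT-ID].appspot.com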

Kubernetes Cluster

For a Cloud Provider in Harness, you can specify a Kubernetes-supporting cloud platform, such as Google Cloud Platform or OpenShift, or your own Kubernetes cluster, and then define the deployment environment for Harness to use.

For this guide, we will use the Kubernetes Cluster Cloud Provider. If you use GCP, see Creating a Cluster from Google.

The specs for the Kubernetes cluster you create will depend on the microservices or apps you will deploy to it. To give you guidance on the node specs for the Kubernetes Cluster used in this guide, here is a node pool created for a Kubernetes cluster in GCP:

For Harness deployments, a Kubernetes cluster requires the following:

  • Credentials for the Kubernetes cluster in order to add it as a Cloud Provider. If you set up GCP as a cloud provider using a GCP user account, that account should also be able to configure the Kubernetes cluster on the cloud provider.
  • The kubectl command-line tool must be configured to communicate with your cluster.
  • A kubeconfig file for the cluster. The kubeconfig file configures access to a cluster. It does not need to be named kubeconfig.

For more information, see Accessing Clusters and Configure Access to Multiple Clusters from Kubernetes.
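A quick way to verify the kubectl and kubeconfig requirements above, assuming a GKE cluster (the cluster, zone, and project names are placeholders):

# Fetch credentials for the cluster and write them to your kubeconfig
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project

# Confirm kubectl can reach the cluster
kubectl cluster-info
kubectl get nodes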

Set Up a Kubernetes Cluster Cloud Provider

To set up a Kubernetes Cluster Cloud Provider, do the following:

  1. In Harness, click Setup.
  2. Click Cloud Providers.
  3. Click Add Cloud Provider. The Cloud Provider dialog opens.
    In this example, we will add a Kubernetes Cluster Cloud Provider, but there are several other provider options. In some cases, you will need to provide access keys in order for the delegate to connect to the provider.
  4. In Type, select Kubernetes Cluster.
  5. In Display Name, enter a name for the Cloud Provider.
  6. Click the option Inherit Cluster Details from selected Delegate to use the credentials of the Delegate you installed in your cluster.
  7. In Delegate Name, select the name of the Delegate installed in your cluster. When you are done, the dialog will look something like this:

  8. Click SUBMIT. The Kubernetes Cluster Cloud Provider is added.

Helm

Harness has only two Helm requirements for deploying to your Kubernetes cluster:

  • Helm and Tiller installed and running in one pod in your Kubernetes cluster.
  • Helm chart hosted on an accessible server. The server may allow anonymous access.

The Helm version must match the Tiller version running in the cluster (use helm version to check). If Tiller is not the latest version, then upgrade Tiller to the latest version (helm init --upgrade), or match the Helm version with the Tiller version. You can set the Helm version in the Harness Delegate YAML file using the HELM_DESIRED_VERSION environment property. For more information, see Helm and the Kubernetes Delegate in this document.
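For example, you can compare the client and server versions and, if needed, upgrade Tiller using standard Helm 2 commands:

# Client and Server versions should match
helm version

# If Tiller is behind the client, upgrade it
helm init --upgrade
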
Set Up Helm on a Kubernetes Cluster

Setting up Helm and Tiller on a Kubernetes cluster is a simple process. Log into the cluster (for example, the Google Cloud Shell), and use the following commands to set up Helm.

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

$ kubectl --namespace kube-system create sa tiller

$ kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller

$ helm init --service-account tiller

$ helm repo update

# verify that helm is installed in the cluster
$ kubectl get deploy,svc tiller-deploy -n kube-system

Here is an example of a shell session with the commands and the output:

j_doe@cloudshell:~ (project-121212)$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7230 100 7230 0 0 109k 0 --:--:-- --:--:-- --:--:-- 110k
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.

j_doe@cloudshell:~ (project-121212)$ kubectl --namespace kube-system create sa tiller
serviceaccount "tiller" created

j_doe@cloudshell:~ (project-121212)$ kubectl create clusterrolebinding tiller \
> --clusterrole cluster-admin \
> --serviceaccount=kube-system:tiller

clusterrolebinding.rbac.authorization.k8s.io "tiller" created

j_doe@cloudshell:~ (project-121212)$ helm init --service-account tiller
$HELM_HOME has been configured at /home/john_doe/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

j_doe@cloudshell:~ (project-121212)$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

j_doe@cloudshell:~ (project-121212)$ kubectl get deploy,svc tiller-deploy -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/tiller-deploy 1 1 1 1 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tiller-deploy ClusterIP 10.63.251.235 <none> 44134/TCP 20s

If you are using TLS for communication between Helm and Tiller, ensure that you use the --tls parameter with your commands. For more information, see Command Flags in the Helm Deploy Step section of this document, and see Using SSL Between Helm and Tiller from Helm, particularly the section Securing your Helm Installation.

Helm Chart

In this guide, we will be using a simple Helm chart template for NGINX created by Bitnami. The Helm chart sets up Kubernetes for a Docker image of NGINX. There are three main files in the Helm chart template:

  • svc.yaml - Defines the manifest for creating a service endpoint for your deployment.
  • deployment.yaml - Defines the manifest for creating a Kubernetes deployment.
  • vhost.yaml - ConfigMap used to store non-confidential data in key-value pairs.

The Helm chart is pulled from the Bitnami Github repository. You can view all the chart files there, but the key templates are included below.

Here is an svc.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  type: {{ .Values.serviceType }}
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: {{ template "fullname" . }}
Here is a deployment.yaml file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  selector:
    matchLabels:
      app: {{ template "fullname" . }}
      release: "{{ .Release.Name }}"
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        release: "{{ .Release.Name }}"
        heritage: "{{ .Release.Service }}"
    spec:
      {{- if .Values.image.pullSecrets }}
      imagePullSecrets:
      {{- range .Values.image.pullSecrets }}
      - name: {{ . }}
      {{- end }}
      {{- end }}
      containers:
      - name: {{ template "fullname" . }}
        image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        ports:
        - name: http
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 30
          timeoutSeconds: 5
          failureThreshold: 6
        readinessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 5
        volumeMounts:
        {{- if .Values.vhost }}
        - name: nginx-vhost
          mountPath: /opt/bitnami/nginx/conf/vhosts
        {{- end }}
      volumes:
      {{- if .Values.vhost }}
      - name: nginx-vhost
        configMap:
          name: {{ template "fullname" . }}
      {{- end }}
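
The chart's third file, vhost.yaml, is not reproduced above. As a rough sketch, a ConfigMap template of this kind typically looks something like the following (the actual Bitnami template may differ):

{{- if .Values.vhost }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
data:
  vhost.conf: |-
{{ .Values.vhost | indent 4 }}
{{- end }}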

Add Helm Service and Helm Chart

The following procedure creates a Harness Application and adds a Service that uses a Docker image and Helm chart for a Kubernetes deployment.

An application in Harness represents a logical group of one or more entities, including Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD. For more information, see Application Checklist.

To add the Harness Application and service, do the following:

Create the Harness Application

  1. In Harness, click Setup.
  2. Click Add Application. The Application dialog appears.
  3. Give your application a name that describes your microservice or app. For the purposes of this guide, we use the name Docker-Helm.
  4. Click SUBMIT. The new application is added.
  5. Click the application name to open the application. The application entities are displayed.

Add the Helm Service to the Application

  1. In your new application, click Services. The Services page appears.
  2. In the Services page, click Add Service. The Service dialog appears.
  3. In Name, enter a name for your microservice. For example, if your microservice were the checkout service in a Web application, you could name it checkout. For this guide, we will use Docker-Helm.
  4. In Deployment Type, select Helm. Harness will create a service that is designed for Helm deployments. When you are finished, the dialog will look like this:

  5. Click SUBMIT. The new service is added. Let's look at where Docker, Kubernetes, and Helm are configured in the Service page:

The following list describes where each component is configured in the Service:

  • Docker - Artifact Source: You add a pointer to the Docker image location you want to deploy.
  • Helm - Chart Specification: You enter the Helm chart repo and chart to use. Typically, this is the only Helm configuration needed in a Harness service. This is the easiest way to point to your chart, but you can add the chart info in Values YAML instead.
  • Helm - Values YAML: You can enter the contents of a Helm values.yaml file here. This file contains the default values for a chart. If you want to point to your Helm chart here, you can simply add the YAML:

harness:
  helm:
    chart:
      name: nginx
      version: 1.0.1
      url: https://charts.bitnami.com/bitnami

Add the Docker Artifact Source

  1. In the new service, click Add Artifact Source, and select Docker Registry. There are a number of artifact sources you can use. For more information, see Add a Docker Image Server. The Docker Registry dialog appears.
  2. In Name, let Harness generate a name for the source.
  3. In Source Server, select the Artifact Server you added earlier in this guide. We are using Docker Hub in this guide.
  4. In Docker Image Name, enter the image name. Official images in public repos such as Docker Hub need the library prefix, for example, library/nginx. For this guide, we will use Docker Hub and the publicly available NGINX image at library/nginx.
  5. Click SUBMIT. The Artifact Source is added.

Add the Helm Chart

As explained earlier, you have two options when entering the Helm chart info: the Chart Specifications or the Values YAML. For this guide, we will use the Chart Specifications.

  1. Expand the Helm section of the service.
  2. Click Chart Specifications. The Helm Chart Specifications dialog appears.
  3. In Chart Name, enter the name of the chart in that repo. For this guide, we use nginx.
  4. In Chart Version, enter the chart version to use. This is found in the version label in the chart's Chart.yaml file. For this guide, we will use 1.0.1. If you leave this field empty, Harness gets the latest chart.
  5. In Chart URL, enter the URL for the server hosting the Helm chart. For this guide, we will use https://charts.bitnami.com/bitnami. If you want to use a different method of installing a chart, see the table below this procedure.

    When you are finished, the dialog will look like this:

  6. Click SUBMIT. The chart specification is added to the service.

There are five ways a chart can be installed. The following list describes how to fill out the Helm Chart Specifications dialog for each method. N/A indicates that you enter nothing.

  • Chart reference and repo URL. This is the most common method.
    Example: helm install --repo https://example.com/charts/ nginx
    Field Values: Chart URL: https://example.com/charts/, Chart Name: nginx
  • Chart reference.
    Example: helm install stable/nginx
    Field Values: Chart URL: N/A, Chart Name: stable/nginx
  • Path to a packaged chart. In this method, the chart file is located on the same pod as the Harness Delegate. You can add a Delegate Profile that copies the chart from a repo to the pod. For more information, see Delegate Profiles.
    Example: helm install ./nginx-1.2.3.tgz
    Field Values: Chart URL: N/A, Chart Name: dir_path_to_delegate/nginx
  • Path to an unpacked chart directory. In this method, the chart file is located on the same pod as the Harness Delegate. You can add a Delegate Profile that copies the chart from a repo to the pod. For more information, see Delegate Profiles.
    Example: helm install ./nginx
    Field Values: Chart URL: N/A, Chart Name: dir_path_to_delegate/nginx
  • Absolute URL.
    Example: helm install https://example.com/charts/nginx-1.2.3.tgz
    Field Values: Chart URL: N/A, Chart Name: https://example.com/charts/nginx-1.2.3.tgz

For Helm, that's it. You don't have to do any further configuration to the service. Harness will use the chart you specified to configure the Kubernetes cluster.

File-based repo triggers are a powerful feature of Harness that lets you set a Webhook on your repo to trigger a Harness Workflow or Pipeline when a Push event occurs in the repo. For more information, see File-based Repo Triggers.

Now you can define the deployment environment and workflow for the deployment.

Before we do that, let's look at the Helm Value YAML section. You can enter the YAML for your values.yaml file. The values.yaml file contains the default values for a chart. You will typically have this file in the repo for your chart, but you can add it in the Harness service instead.

The Helm Value YAML dialog has the following placeholders that save you from having to enter some variables:

  • ${NAMESPACE} - Replaced with the Kubernetes namespace, such as default. You will specify the namespace of the cluster when adding the Harness environment service infrastructure later, and the placeholder will be replaced with that namespace.
  • ${DOCKER_IMAGE_NAME} - Replaced with the Docker image name.
  • ${DOCKER_IMAGE_TAG} - Replaced with the Docker image tag.
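
For example, a Values YAML entry that uses these placeholders might look like this (the keys shown are illustrative; use the keys your chart's values.yaml actually defines):

# Illustrative keys only
image:
  registry: docker.io
  repository: ${DOCKER_IMAGE_NAME}
  tag: ${DOCKER_IMAGE_TAG}
namespace: ${NAMESPACE}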

For information about how values from different sources are compiled at runtime, see Helm Values Priority.

Add Helm Chart from a Private Repo

To add a Helm chart from a private repo, you will need to authenticate with that repo via Harness and obtain the chart. This is configured using a Harness Delegate Profile containing a script that installs Helm and adds the repo using your authentication credentials. For more information about Delegate Profiles, see Delegate Installation and Management.

You use the Delegate Profile when you install the Delegate into your environment (in our example, the Kubernetes cluster). Lastly, you enter the name of the chart in your Service. At runtime, the Harness Delegate will obtain the Helm chart from the private repo.

Adding a Helm Chart from a private repo involves the following steps:

  • Create Harness secrets to use in the Delegate Profile.
  • Create Delegate Profile and add Helm private repo script.
  • Install and run Delegate with Delegate Profile applied.
  • Add the private repo chart to your Harness Service.

To add a Helm Chart from a private repo:

  1. Create Harness encrypted text secrets to use in the Delegate Profile. In this example, we will create two secrets, one for the repo Username and one for the repo Password.
    1. Hover over Continuous Security, and click Secrets Management. The Secrets Management page appears.
    2. In Secrets Management, click Encrypted Text.
    3. Click Add Encrypted Text. The Add Encrypted Text dialog appears.
    4. In Display Name, enter a name for the encrypted text, in this case, repoUsername. This is the name you will use to reference the text in your Delegate Profile, like this: ${secrets.getValue("repoUsername")}.
    5. In Value, enter the username for the private repo account.
    6. Click SUBMIT.
    7. Repeat these steps to add an encrypted text secret for the password for the private repo account, such as repoPassword. You will reference this password secret in the Delegate profile like this: ${secrets.getValue("repoPassword")}.
  2. Create Delegate Profile and add Helm private repo script.
    1. In Harness, click Setup.
    2. Click Harness Installations.
    3. Click Manage Delegate Profiles, and then click Add Delegate Profile. The Manage Delegate Profile dialog appears.
    4. In Name, enter a name for the profile, such as Helm-Private-Repo.
    5. In Startup Script, enter the script you want to run when the profile is applied, such as when the Delegate is started. Here is an example of a profile that installs and runs Helm and Tiller and connects to a private repo using the Harness secrets you created:
    # Other installation method
    # curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
    # chmod 700 get_helm.sh
    # ./get_helm.sh

    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash

    # Initialize Helm and install Tiller
    helm init --service-account tiller

    helm repo add --username ${secrets.getValue("repoUsername")} --password ${secrets.getValue("repoPassword")} repo_name https://charts.bitnami.com/bitnami

    helm repo update
    Your script will vary depending on the repo you use and how you want to install Helm. For information on installing Helm, see Installing Helm from Helm. For connecting to repos on Artifactory, see Helm Chart Repositories; for AWS S3, see How to create a Helm chart repository using Amazon S3.
    6. Click SUBMIT.

The script example above obtains the latest version of Helm. Ensure that the version of Tiller running in the Kubernetes cluster is compatible with the version of Helm you run.

  3. Install and run Delegate with Delegate Profile applied. For more information about installing and running the Delegate, see Delegate Installation and Management.
    1. Click Download Delegate and select the delegate type, in this case, Kubernetes YAML. The Delegate Setup dialog appears.
    2. In Name, enter a name for the Delegate.
    3. In Profile, select the Delegate Profile for the private repo.

    4. Click SUBMIT. The Delegate Kubernetes YAML tar file is downloaded. Install it on the host you are using for the Delegate.
If you use multiple Delegates, apply the Delegate Profile to each Delegate to ensure that Harness can obtain the chart from the private repo at runtime. During deployment, Harness will check to see if the Delegate has Helm installed. If the Delegate does not have Helm installed, the deployment will fail.

  4. Add the private repo chart to your Harness Service.
    1. In your Harness Service, in the Helm Chart Specifications dialog, simply enter the name of the chart in Display Name. You can enter the chart version in Chart Version, but you do not need to enter the private repo URL in Chart URL. You can leave Chart URL empty.

    2. Click SUBMIT. You have set up Harness to use a private repo Helm chart.

Define Deployment Environment

After you have set up the Harness service for your Helm deployment, you can add a Harness environment that lists the Cloud Provider and Kubernetes cluster where Harness will deploy your Docker image. You will define the service infrastructure for Harness to use when deploying the Docker image.

The following procedure creates a Harness environment where you can deploy your Docker image. For this guide, we will be using a Kubernetes cluster on Google Cloud Platform for the deployment environment.

To add a Harness environment, do the following:

Create a New Harness Environment

  1. In your Harness application, click Environments. The Environments page appears.
  2. Click Add Environment. The Environment dialog appears.
  3. In Name, enter a name that describes the deployment environment, for example, GCP-K8S-Helm.
  4. In Environment Type, select Non-Production.
  5. Click SUBMIT. The new Environment page appears.

Add a Service Infrastructure

You define the Kubernetes cluster to use for deployment as a Service Infrastructure. For this guide, we will use the GCP Cloud Provider you added and the Kubernetes cluster with Helm installed.

To add a service infrastructure, do the following:

  1. In the Harness environment, click Add Service Infrastructure. The Service Infrastructure dialog appears.
  2. In Service, select the Harness Service you created earlier for your Docker image.
  3. In Deployment Type, select Helm.
  4. In Cloud Provider, select the GCP Cloud Provider you added earlier. The dialog will look something like this:
  5. Click Next. The Configuration section is automatically populated with the clusters located using the Cloud Provider connection. If the Cluster Name drop-down is taking a long time to load, check the connectivity to the Cloud Provider of the host running the Harness delegate.
  6. In Cluster Name, select the cluster you created for this deployment.
  7. In Namespace, select the namespace of the Kubernetes cluster. Typically, this is default. If you used the Helm Value YAML dialog placeholder ${NAMESPACE}, the namespace you select here will replace that placeholder.

    When you are finished, the dialog will look something like this:
  8. Click SUBMIT. The new service infrastructure is added to the Harness environment.

That is all you have to do to set up the deployment environment in Harness. You have the Docker image service and deployment environment set up. Now you can create the deployment workflow in Harness.

Override Service Helm Values

In the Harness environment, you can override Helm values specified in the Harness service. This can ensure that the Helm values used in the environment are consistent even if multiple services with different Helm values use that environment.

To override a service Helm value, do the following:

  1. In the Harness environment, click Add Configuration Overrides. The Service Configuration Override dialog appears.
  2. In Service, click the name of the service you want to override, or click All Services. The dialog changes to provide Override Type options.
  3. Click Value YAML. A text area appears where you can paste an entire values.yaml file or simply override one or more values, such as a Docker image name. Click SUBMIT when you are finished.

For more information, see Helm Values Priority.

Workflows with Helm

Once you have added the Harness service and environment you can add a Harness workflow. Workflows manage how your Harness service is deployed, verified, and rolled back, among other important phases. There are many different types of workflows, from Basic to Canary, and Blue/Green.

Helm deployments use a Basic workflow that simply deploys the Docker image to the Kubernetes cluster using the Helm chart.

For more information about workflows, see Add a Workflow, Canary Deployment, and Blue/Green Deployment.

To add the workflow, do the following:

Create the Workflow

  1. In your Harness application, click Workflows.
  2. On the Workflows page, click Add Workflow. The Workflow dialog appears.
  3. In Name, give your workflow a name that describes its purpose, such as NGINX-K8s-Helm.
  4. In Workflow Type, select Basic Deployment. Helm deployments are Basic deployments, unlike Canary or Blue/Green. They are single phase deployments where each deployment is installed or upgraded. You can create multiple Helm deployments and add them to a Harness pipeline. For more information, see Add a Pipeline.
  5. In Environment, select the environment you created earlier in this guide.
  6. In Service, select the service you added earlier in this guide.
  7. In Service Infrastructure, select the service infrastructure you added to your environment earlier in this guide. When you are done, the dialog will look something like this:
  8. Click SUBMIT. The workflow is displayed.

Harness creates all the steps needed to deploy the service to the service infrastructure.

You can see that one workflow step, Helm Deploy, is incomplete.

Steps are marked incomplete if they need additional input from you. To complete this step, see the following section.

Helm Deploy Step

In your workflow, click the Helm Deploy step. The Helm Deploy dialog appears.

The Helm Deploy step has a few options you can use to manage how Helm is used in the deployment.

Helm Release Name

In the Helm Deploy Step, you need to add a Helm release name.

From the Helm docs:

A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster. And each time it is installed, a new release is created. Consider a MySQL chart. If you want two databases running in your cluster, you can install that chart twice. Each one will have its own release, which will in turn have its own release name.

Since Helm requires release names to be unique across the cluster, Harness generates a unique identifier with the variable ${infra.helm.shortId}. You can use this variable as a prefix or suffix for the release name. We recommend the following release name:

${service.name}-${env.name}-${infra.helm.shortId}

If the service name is NGINX and the environment name is GCP-K8s-Helm, then the release name will be nginx-gcp-k8s-helm-rjogwmu, where rjogwmu is generated by ${infra.helm.shortId}.

Command Flags

You can enter Helm command flags in this field. The flags are applied to the helm command at runtime:

helm <command> [commandFlags]

For example, if you entered --tls in Command Flags, then at runtime it would be applied to the helm commands:

helm get manifest --tls nginx-gcp-k8s-helm-rjogwmu

Deployment Steady State Timeout

For Deployment steady state timeout, you can leave the default of 10 minutes. It is unlikely that deployment would ever take longer than 10 minutes.

Git Connector

For Git connector, you can specify values or a full values.yaml file in a Git repo, and Harness will fetch the values at runtime. For information on how Harness merges values from different sources for the values.yaml, see Helm Values Priority.

File-based repo triggers are a powerful feature of Harness that lets you set a Webhook on your repo to trigger a Harness Workflow or Pipeline when a Push event occurs in the repo. For more information, see File-based Repo Triggers.

To use a Git connector, you need to add a Git repo as a Harness Source Repo provider. For more information, see Add Source Repo Providers.

To use a Git connector in the Helm Deploy step, do the following:

  1. In Git connector, select the Git repo you added as a Source Repo.
  2. Select either Use specific commit ID and enter the commit ID, or select Use latest commit from branch and enter the branch name.
  3. In File path, enter the path to the values.yaml file in the repo, including the repo name, like helm/values.yaml.

Here's an example of what the Git connector might look like:

Completed Helm Deploy Step

When you are done, the typical Helm Deploy dialog will look something like this:

Only the Release Name is required.

Click SUBMIT and your workflow is complete. You can view or modify the default rollback steps and other deployment strategies in the workflow (for more information, see Add a Workflow), but for this guide, the workflow is complete and you can now deploy it. See the next section for deployment steps.

Deploy the Workflow

The following procedure deploys the workflow you created in this guide.

Before deploying the workflow, ensure all Harness delegates that can reach the resources used in the workflow are running. In Harness, click Setup, and then click Harness Installations.

To deploy your workflow, do the following:

  1. In your workflow, click Deploy. The Start New Deployment dialog appears.
  2. In Notes, enter information about the deployment that others should know. Harness records all the important details, and maintains the records of each deployment, but you might need to share some information about your deployment.
  3. Click SUBMIT. The Deployments page appears, and displays the deployment in real time.

The deployment was successful! Now let's look further at the Helm deployment.

Click Phase 1. You will see the details of the phase, including the workflow entities, listed.

Click Phase 1 to expand it and see Deploy Containers. Expand Deploy Containers and click the Helm Deploy step you set up in the workflow. The details for the step are displayed, along with the command output:

Viewing Deployment in the Log

Let's look through the deployment log and see how your Docker image was deployed to your cluster using Helm.

First, we check to see if the Helm chart repo has already been added and, if not, add it from https://charts.bitnami.com/bitnami.

INFO   2018-10-09 16:59:51    Adding helm repository https://charts.bitnami.com/bitnami
INFO 2018-10-09 16:59:51 Checking if the repository has already been added
INFO 2018-10-09 16:59:51 Repository not found
INFO 2018-10-09 16:59:51 Adding repository https://charts.bitnami.com/bitnami with name examplefordoc-nginx
INFO 2018-10-09 16:59:51 Successfully added repository https://charts.bitnami.com/bitnami with name examplefordoc-nginx

Next, we look to see if a release with the same release name exists:

INFO   2018-10-09 16:59:51    Installing
INFO 2018-10-09 16:59:51 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-09 16:59:51 Release: "nginx-gcp-k8s-helm-rjogwmu" not found

This is the release name generated from our Helm Deploy step name of ${service.name}-${env.name}-${infra.helm.shortId}.

Since this is the first deployment, an existing release with that name is not found, and a new release occurs.

INFO   2018-10-09 16:59:52    No previous deployment found for release. Installing chart
INFO 2018-10-09 16:59:54 NAME: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-09 16:59:54 LAST DEPLOYED: Tue Oct 9 23:59:53 2018
INFO 2018-10-09 16:59:54 NAMESPACE: default
INFO 2018-10-09 16:59:54 STATUS: DEPLOYED

You can see the Kubernetes events in the logs as the pods are created.

INFO   2018-10-09 16:59:54    NAME                                       READY  STATUS             RESTARTS  AGE
INFO 2018-10-09 16:59:54 nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs 0/1 ContainerCreating 0 0s
INFO 2018-10-09 16:59:54
INFO 2018-10-09 16:59:54 Deployed Controllers [2]:
INFO 2018-10-09 16:59:54 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1)
INFO 2018-10-09 16:59:54 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1)
INFO 2018-10-09 16:59:54
INFO 2018-10-09 16:59:54 **** Kubernetes Controller Events ****
...
INFO 2018-10-09 16:59:54 Desired number of pods reached [1/1]
...
INFO 2018-10-09 16:59:54 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1]
INFO 2018-10-09 16:59:54 Waiting for pods to be running [0/1]
INFO 2018-10-09 17:00:05
...
INFO 2018-10-09 17:00:05 **** Kubernetes Pod Events ****
INFO 2018-10-09 17:00:05 Pod: nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs
INFO 2018-10-09 17:00:05 - pulling image "docker.io/bitnami/nginx:1.14.0-debian-9"
INFO 2018-10-09 17:00:05 - Successfully pulled image "docker.io/bitnami/nginx:1.14.0-debian-9"
INFO 2018-10-09 17:00:05 - Created container
INFO 2018-10-09 17:00:05 - Started container
INFO 2018-10-09 17:00:05
INFO 2018-10-09 17:00:05 Pods are running [1/1]
INFO 2018-10-09 17:00:05 Waiting for pods to reach steady state [0/1]

Lastly, and most importantly, Harness confirms that the pods have reached steady state to ensure the deployment was successful.

INFO   2018-10-09 17:00:20    Pods have reached steady state [1/1]
INFO 2018-10-09 17:00:20 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. Pod IP: 10.60.1.8
INFO 2018-10-09 17:00:20 Command finished with status SUCCESS

Helm Rollbacks

Harness adds a revision number for each deployment. If a new deployment fails, Harness rolls back to the previous deployment revision number. You can see the Revision number in the log of a deployment. Here is a sample from a log after an upgrade:

INFO   2018-10-11 14:43:09    Installing
INFO 2018-10-11 14:43:09 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-11 14:43:09 REVISION UPDATED STATUS CHART DESCRIPTION
INFO 2018-10-11 14:43:09 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete
INFO 2018-10-11 14:43:09 2 Thu Oct 11 18:27:35 2018 SUPERSEDED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 14:43:09 3 Thu Oct 11 21:30:24 2018 DEPLOYED nginx-1.0.1 Upgrade complete

The REVISION column lists the revision number. Note revision number 3, the last successfully deployed version. We will now fail a deployment that would have been revision 4, and you will see Harness roll back to revision 3.

Here is an example where a failure has been initiated using an erroneous HTTP call (Response Code 500) to demonstrate the rollback behavior:

To experiment with rollbacks, you can simply add a step to your workflow that will fail.

The failed deployment section is red, but the Rollback Phase 1 step is green, indicating that rollback has been successful. If we expand Rollback Phase 1, we can see the rollback information in the Helm Rollback step details:

The failed version is Release Old Version 4 and the Release rollback Version is revision 3, the last successful version. The rollback version now becomes the new version, Release New Version 5.

Let's look at the log of the rollback to see Harness rolling back successfully.

INFO   2018-10-11 14:43:22    Rolling back
INFO 2018-10-11 14:43:23 Rollback was a success! Happy Helming!
INFO 2018-10-11 14:43:23
INFO 2018-10-11 14:43:24 Deployed Controllers [2]:
INFO 2018-10-11 14:43:24 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1)
INFO 2018-10-11 14:43:24 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1)
INFO 2018-10-11 14:43:26 Desired number of pods reached [1/1]
INFO 2018-10-11 14:43:26 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1]
INFO 2018-10-11 14:43:26 Pods are running [1/1]
INFO 2018-10-11 14:43:26 Pods have reached steady state [1/1]
INFO 2018-10-11 14:43:28 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. Pod IP: 10.60.1.8
INFO 2018-10-11 14:43:28 Command finished with status SUCCESS

When the next deployment is successful, you can see a record of the rollback release:

INFO   2018-10-11 15:38:16    Installing
INFO 2018-10-11 15:38:16 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-11 15:38:16 REVISION UPDATED STATUS CHART DESCRIPTION
INFO 2018-10-11 15:38:16 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete
INFO 2018-10-11 15:38:16 2 Thu Oct 11 18:27:35 2018 SUPERSEDED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 15:38:16 3 Thu Oct 11 21:30:24 2018 SUPERSEDED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 15:38:16 4 Thu Oct 11 21:43:12 2018 SUPERSEDED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 15:38:16 5 Thu Oct 11 21:43:22 2018 DEPLOYED nginx-1.0.1 Rollback to 3

The Description for the last release, Revision 5, states that it was a Rollback to 3.

Upgrading Deployments

When you run a Helm deployment a second time, it will upgrade your Kubernetes cluster. The upgrade is performed in a rolling fashion that does not cause downtime: it gracefully deletes old pods and adds new pods with the new version of the artifact.

Let's look at the deployment log from an upgrade to see how Harness handles it.

First, Harness looks for all existing Helm chart releases with the same name and upgrades them:

INFO   2018-10-11 14:30:22    Installing
INFO 2018-10-11 14:30:22 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu
INFO 2018-10-11 14:30:24 REVISION UPDATED STATUS CHART DESCRIPTION
INFO 2018-10-11 14:30:24 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete
INFO 2018-10-11 14:30:24 2 Thu Oct 11 18:27:35 2018 DEPLOYED nginx-1.0.1 Upgrade complete
INFO 2018-10-11 14:30:24
INFO 2018-10-11 14:30:24 Previous release exists for chart. Upgrading chart
INFO 2018-10-11 14:30:25 Release "nginx-gcp-k8s-helm-rjogwmu" has been upgraded. Happy Helming!
INFO 2018-10-11 14:30:25 LAST DEPLOYED: Thu Oct 11 21:30:24 2018
INFO 2018-10-11 14:30:25 NAMESPACE: default
INFO 2018-10-11 14:30:25 STATUS: DEPLOYED

Then it upgrades the cluster pods with the new Docker image of NGINX:

INFO   2018-10-11 14:30:25    Deployed Controllers [2]:
INFO 2018-10-11 14:30:25 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1)
INFO 2018-10-11 14:30:25 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1)
INFO 2018-10-11 14:30:25 Desired number of pods reached [1/1]
INFO 2018-10-11 14:30:25 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1]
INFO 2018-10-11 14:30:25 Pods are running [1/1]
INFO 2018-10-11 14:30:25 Pods have reached steady state [1/1]
INFO 2018-10-11 14:30:27 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. Pod IP: 10.60.1.8
INFO 2018-10-11 14:30:27 Command finished with status SUCCESS

Helm Values Priority

Typically, the values.yaml applied to your Kubernetes cluster is a single file from the Helm chart repo, but it can also be a merger of values from different sources. This enables a key in values.yaml to get updated with different, and likely more current, values.

You can simply use a values.yaml in the Helm chart repo. There is no requirement to use multiple sources.

Values for the values.yaml can be specified in the following sources:

  • Harness service.
  • Harness environment.
  • Harness workflow via a Git connector.
  • The values.yaml file in the Helm chart repo.

Harness will merge key values from all of these sources into a values.yaml to apply to Kubernetes.

In case of conflicts, values are overridden. Here is how values are overridden, from lowest to highest priority:

  1. Chart repo values.yaml has the least priority. This is the values.yaml in the chart repo you specify in the Helm Chart Specifications in the Harness service.
  2. Harness service values override chart repo values. These are values specified in the Helm Values YAML in the Harness service.
  3. Harness environment values override Harness service values. These are the values you specify using Add Configuration Overrides in a Harness environment.
  4. Harness workflow values added via a Git connector have the highest priority.
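
As a simple illustration (hypothetical keys and values, not from this guide), if the same key appears in multiple sources, the higher-priority source wins and the remaining keys are merged:

# Chart repo values.yaml (lowest priority)
replicaCount: 1
image:
  tag: 1.14.0

# Harness Service Values YAML (overrides the chart repo)
image:
  tag: ${DOCKER_IMAGE_TAG}

# Harness Environment override (overrides the Service)
replicaCount: 2

# Effective values applied to Kubernetes (a Workflow Git connector value,
# if present, would win over all of these)
replicaCount: 2
image:
  tag: ${DOCKER_IMAGE_TAG}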

Do it All in YAML

All of the Harness configuration steps in this guide can be performed using code instead of the Harness user interface. You can view or edit the YAML for any Harness configuration by clicking the </> button on any page.

When you click the button, the Harness code editor appears:

You can edit YAML and click Save to change the configuration.

For example, here is the YAML for the workflow we set up in this guide.

harnessApiVersion: '1.0'
type: BASIC
envName: GCP-K8S-Helm
failureStrategies:
- executionScope: WORKFLOW
  failureTypes:
  - APPLICATION_ERROR
  repairActionCode: ROLLBACK_WORKFLOW
  retryCount: 0
notificationRules:
- conditions:
  - FAILED
  executionScope: WORKFLOW
  notificationGroupAsExpression: false
  notificationGroups:
  - Account Administrator
phases:
- type: HELM
  computeProviderName: Harness Sample K8s Cloud Provider
  daemonSet: false
  infraMappingName: Kubernetes Cluster_ Harness Sample K8s Cloud Provider_DIRECT_Kubernetes_default
  name: Phase 1
  phaseSteps:
  - type: HELM_DEPLOY
    name: Deploy Containers
    steps:
    - type: HELM_DEPLOY
      name: Helm Deploy
      properties:
        steadyStateTimeout: 10
        gitFileConfig: null
        helmReleaseNamePrefix: ${service.name}-${env.name}-${infra.helm.shortId}
    stepsInParallel: false
  - type: VERIFY_SERVICE
    name: Verify Service
    stepsInParallel: false
  - type: WRAP_UP
    name: Wrap Up
    stepsInParallel: false
  provisionNodes: false
  serviceName: Docker-Helm
  statefulSet: false
rollbackPhases:
- type: HELM
  computeProviderName: Harness Sample K8s Cloud Provider
  daemonSet: false
  infraMappingName: Kubernetes Cluster_ Harness Sample K8s Cloud Provider_DIRECT_Kubernetes_default
  name: Rollback Phase 1
  phaseNameForRollback: Phase 1
  phaseSteps:
  - type: HELM_DEPLOY
    name: Deploy Containers
    phaseStepNameForRollback: Deploy Containers
    statusForRollback: SUCCESS
    steps:
    - type: HELM_ROLLBACK
      name: Helm Rollback
    stepsInParallel: false
  - type: VERIFY_SERVICE
    name: Verify Service
    phaseStepNameForRollback: Deploy Containers
    statusForRollback: SUCCESS
    stepsInParallel: false
  - type: WRAP_UP
    name: Wrap Up
    stepsInParallel: false
  provisionNodes: false
  serviceName: Docker-Helm
  statefulSet: false
templatized: false

Helm and the Kubernetes Delegate

You can set the Helm version for the Harness Kubernetes Delegate to use.

The Harness Kubernetes Delegate is configured and run using a YAML file that you download from Harness, as described in Delegate Installation and Management. You can edit the YAML file and set the desired Helm version to use with the HELM_DESIRED_VERSION parameter.

Here is a sample of the Kubernetes Delegate YAML file showing the HELM_DESIRED_VERSION parameter in the env section:

apiVersion: v1
kind: Namespace
metadata:
  name: harness-delegate
...
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  ...
spec:
  ...
  template:
    metadata:
      labels:
        harness.io/app: harness-delegate
        harness.io/account: yvcrcl
        harness.io/name: harness-delegate
    spec:
      containers:
      - image: harness/delegate:latest
        imagePullPolicy: Always
        name: harness-delegate-instance
        resources:
          limits:
            cpu: "1"
            memory: "6Gi"
        env:
        ...
        - name: HELM_DESIRED_VERSION
          value: ""
      restartPolicy: Always

You can find the Helm versions to use on GitHub.
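
For example, to pin the Delegate to a specific Helm client version, you might set (the version shown is illustrative; choose one compatible with the Tiller version in your cluster):

        - name: HELM_DESIRED_VERSION
          value: "v2.11.0"  # illustrative version; match your Tiller version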

Next Steps

  • Pipeline and Triggers - Once you have a successful workflow, you can experiment with a Harness pipeline, which is a collection of one or more workflows, and Harness triggers, which enable you to execute a workflow or pipeline deployment using different criteria, such as when a new artifact is added to an artifact source. For more information, see Add a Pipeline and Add a Trigger.
  • Continuous Verification - Add verification steps using Splunk, Sumo Logic, ELK, AppDynamics, New Relic, Dynatrace, and others to your workflow. For more information, see Continuous Verification.

