5 - Kubernetes Canary Workflows

This section walks you through creating a Canary Workflow in Harness and explains what the deployment logs for its Workflow steps include.

By default, Harness Canary Workflows have two phases:

  1. Harness creates a Canary version of the Kubernetes Deployment object defined in your Service Manifests section. Once that Deployment is verified, the Workflow deletes it by default.
  2. Harness runs the actual deployment using a Kubernetes RollingUpdate with the number of pods you specify in the Service Manifests files (for example, replicas: 3).

If you are new to Kubernetes RollingUpdate deployments, see Performing a Rolling Update from Kubernetes. That guide summarizes RollingUpdate and provides an interactive, online tutorial.
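
If you want to watch or control a RollingUpdate from the command line, the standard kubectl rollout commands apply. A minimal sketch, assuming a Deployment named harness-example-deployment in the current namespace:

# watch the rollout until it completes or fails
kubectl rollout status deployment/harness-example-deployment

# list the recorded revisions of the Deployment
kubectl rollout history deployment/harness-example-deployment

# roll back to the previous revision if something looks wrong
kubectl rollout undo deployment/harness-example-deployment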

To create a Canary Workflow for Kubernetes, do the following:

  1. In your Application, click Workflows.
  2. Click Add Workflow. The Workflow dialog appears.
  3. In Name, enter a name for your Workflow, such as NGINX-K8s.
  4. In Workflow Type, select Canary Deployment.
  5. In Environment, select the Environment you created for your Kubernetes deployment.
  6. Click SUBMIT. By default, the new Canary Workflow does not have any phases pre-configured.
    We will create the two Phases for the Canary Deployment next.

Canary Phase

The Canary Phase creates a Canary deployment using your Service Manifests files and the number of pods you specify in the Canary Deployment step in the Workflow.

To add the Canary Phase, do the following:

  1. In Deployment Phases, click Add Phase. The Workflow Phase dialog appears.
  2. In Service, select the Service where you set up your Kubernetes configuration files.
  3. In Service Infrastructure, select the Service Infrastructure where you want this Workflow Phase to deploy your Kubernetes objects. This is the Service Infrastructure with the Kubernetes cluster and namespace for this Phase's deployment.
  4. In Service Variable Overrides, you can add a variable to overwrite any variable in the Service you selected. Ensure that the variable names are identical. This is the same process described for overwriting variables in Override Service Settings.
  5. Click SUBMIT. The new Phase is created.

You'll notice the Phase is titled Canary automatically. Let's look at the default settings for this first Phase of a Canary Workflow.

Canary Deployment Step

Click the Canary Deployment step. The Canary Deployment step dialog appears.

In this step, you will define how many pods are deployed for a Canary test of the configuration files in your Service Manifests section.

  1. In Instance Unit Type, click COUNT or PERCENTAGE.
  2. In Instances, enter the number of pods to deploy.
    If you selected COUNT in Instance Unit Type, this is simply the number of pods.
    If you selected PERCENTAGE, enter a percentage of the pods defined in your Service Manifests files to deploy. For example, if you have replicas: 4 in a manifest in Service, and you enter 50 in Instances, then 2 pods are deployed in this Phase step.
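
To make the arithmetic concrete, here is a quick sketch, assuming the hypothetical values shown:

# Service Manifests values.yaml (assumed)
replicas: 4

# Canary Deployment step settings and the resulting canary pods
# COUNT:      Instances = 2      -> 2 canary pods
# PERCENTAGE: Instances = 50 (%) -> 4 x 50% = 2 canary pods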

Canary Deployment Step in Deployment

Let's look at an example where the Canary Deployment step is configured to deploy a COUNT of 2. Here is the step in the Harness Deployments page:

You can see Target Instance Count 2 in the Details section.

Below Details you can see the logs for the step.

Let's look at the Prepare, Apply, and Wait for Steady State sections of the step's deployment log, with comments added:

Prepare

Here is the log from the Prepare section:

Manifests processed. Found following resources: 

# API objects in manifest file

Kind         Name                         Versioned
ConfigMap    harness-example-config       true
Deployment   harness-example-deployment   false

# each deployment is versioned, this is the second deployment

Current release number is: 2

Versioning resources.

# previous deployment

Previous Successful Release is 1

Cleaning up older and failed releases

# existing number of pods

Current replica count is 1

# Deployment workload executed

Canary Workload is: Deployment/harness-example-deployment-canary

# number specified in Canary Deployment step Instance field

Target replica count for Canary is 2

Done.

The name of the Deployment workload in the Service Manifests file is harness-example-deployment (the name variable is harness-example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.name}}-deployment

As you can see, Harness appends -canary to the name: harness-example-deployment-canary. This identifies Canary Deployment step workloads in your cluster.
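
You can confirm the naming in your cluster while the Canary Phase is running. A hypothetical listing, assuming the example names above:

kubectl get deployments

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
harness-example-deployment          1/1     1            1           9h
harness-example-deployment-canary   2/2     2            2           1m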

The next section is Apply.

Apply

Here you will see the manifests in the Service Manifests section applied using kubectl as a single file, manifests.yaml.

# kubectl command to apply manifests

kubectl --kubeconfig=config apply --filename=manifests.yaml --record

# ConfigMap object created

configmap/harness-example-config-2 created

# Deployment object created

deployment.apps/harness-example-deployment-canary created

Done.

Next, Harness logs the steady state of the pods.

Wait for Steady State

Harness displays the status of each pod deployed and confirms steady state.

# kubectl command for get events

kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only

# kubectl command for status

kubectl --kubeconfig=config rollout status Deployment/harness-example-deployment-canary --watch=true

# status of each pod

Status : Waiting for deployment "harness-example-deployment-canary" rollout to finish: 0 of 2 updated replicas are available...
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 MountVolume.SetUp succeeded for volume "default-token-hwzdf" SuccessfulMountVolume
Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 pulling image "registry.hub.docker.com/library/nginx:stable-perl" Pulling
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 pulling image "registry.hub.docker.com/library/nginx:stable-perl" Pulling
Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled
Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Created container Created
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Created container Created
Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Started container Started
Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Started container Started

Status : Waiting for deployment "harness-example-deployment-canary" rollout to finish: 1 of 2 updated replicas are available...

# canary deployment step completed

Status : deployment "harness-example-deployment-canary" successfully rolled out

Done.

Wrap Up

The Wrap Up log is long and describes all of the container and pod information for the step, using the kubectl command:

kubectl --kubeconfig=config describe --filename=manifests.yaml

Canary Delete Step

Since the Canary Deployment step was successful, it is no longer needed. The Canary Delete step is used to clean up the workload deployed by the Canary Deployment step.

Harness does not roll back Canary deployments because production is not affected during the Canary phase; the Canary phase catches issues before they reach production. You might also want to analyze the Canary deployment after it runs. The Canary Delete step performs the cleanup when required.

In the Wrap Up section of the Workflow is the Canary Delete step.

This step deletes the specified resources. By default, this is the workload deployed in the Canary Deployment step, represented by the variable ${k8s.canaryWorkload}. You can add a namespace before the resource name, like this: example/${k8s.canaryWorkload}.

You can also use the Canary Delete step in the last Workflow of a Pipeline to clean up any workloads you no longer need.
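
Under the hood, the effect is the same as deleting the canary workload yourself. A rough kubectl equivalent, assuming the example workload from this section:

# ${k8s.canaryWorkload} resolves to Deployment/harness-example-deployment-canary here
kubectl delete deployment harness-example-deployment-canary

# with an explicit namespace, matching the example/${k8s.canaryWorkload} form
kubectl delete deployment harness-example-deployment-canary --namespace=example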

Phase 1 of the Canary Deployment Workflow is complete. Now the Workflow needs a Primary Phase to roll out the objects defined in the Service Manifests section.

The Canary Delete step discussed here is just the Delete step renamed to Canary Delete. You can add a Delete step in any Workflow and simply identify the resource to delete.

Canary Delete and Traffic Management

If you are using the Traffic Split step or doing Istio traffic shifting using the Apply step, move the Canary Delete step from the Wrap Up section of the Canary phase to the Wrap Up section of the Primary phase.

Moving the Canary Delete step to the Wrap Up section of the Primary phase will prevent any traffic from being routed to deleted pods before traffic is routed to stable pods in the Primary phase.
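
For context, Istio traffic shifting of this kind is typically expressed as weighted routes in a VirtualService. A minimal sketch with hypothetical host and subset names (not Harness-generated configuration):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: harness-example-virtualservice
spec:
  hosts:
    - harness-example-svc
  http:
    - route:
        - destination:
            host: harness-example-svc
            subset: canary     # pods from the Canary Phase
          weight: 10
        - destination:
            host: harness-example-svc
            subset: stable     # pods from the Primary Phase
          weight: 90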

For more information, see Traffic Management and Kubernetes Workflow Commands.

Primary Phase

The Primary Phase runs the actual deployment as a rolling update with the number of pods you specify in the Service Manifests files (for example, replicas: 3).

Similar to application scaling, during a rolling update of a Deployment the Kubernetes Service load-balances traffic only to available pods, that is, instances that are ready to serve users of the application.
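
Availability here is driven by readiness: a pod only receives Service traffic once its readiness probe passes. A minimal sketch of a readiness probe on the example container (an assumption; the manifests in this section do not define one):

containers:
  - name: harness-example
    image: registry.hub.docker.com/library/nginx:stable-perl
    ports:
      - containerPort: 80
    readinessProbe:
      httpGet:
        path: /       # probe the nginx default page
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10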

To add the Primary Phase, do the following:

  1. In your Workflow, in Deployment Phases, under Canary, click Add Phase.
    The Workflow Phase dialog appears.
  2. In Service, select the same Service you selected in Phase 1.
  3. In Service Infrastructure, select the same Service Infrastructure you selected in Phase 1.
  4. In Service Variable Overrides, you can add a variable to overwrite any variable in the Service you selected. Ensure that the variable names are identical. This is the same process described for overwriting variables in Override Service Settings.
  5. Click SUBMIT. The new Phase is created.

The Phase is named Primary automatically, and contains one step, Rollout Deployment.

Rollout Deployment performs a rolling update. Rolling updates allow an update of a Deployment to take place with zero downtime by incrementally updating pod instances with new ones. The new pods are scheduled on nodes with available resources. The rolling update Deployment uses the number of pods you specified in the Service Manifests (number of replicas).
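
The pace of the rolling update is governed by the Deployment's update strategy. A sketch of the relevant fields, shown with the Kubernetes defaults (the example manifests in this section do not set them explicitly):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # pods that may be unavailable during the update
      maxSurge: 25%         # extra pods allowed above the desired replica count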

Primary Step in Deployment

Let's look at an example where the Primary step deploys the Service Manifests objects. Here is the step in the Harness Deployments page:

Before we look at the logs, let's look at the Service Manifests files it's deploying.

Here is the values.yaml from our Service Manifests section:

name: harness-example
replicas: 1
image: ${artifact.metadata.image}

Here is the spec.yaml from our Service Manifests section:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{.Values.name}}-config
data:
  key: value
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.name}}-deployment
spec:
  replicas: {{int .Values.replicas}}
  selector:
    matchLabels:
      app: {{.Values.name}}
  template:
    metadata:
      labels:
        app: {{.Values.name}}
    spec:
      containers:
        - name: {{.Values.name}}
          image: {{.Values.image}}
          envFrom:
            - configMapRef:
                name: {{.Values.name}}-config
          ports:
            - containerPort: 80

Let's look at the Initialize, Prepare, and Apply stages of the Rollout Deployment.

Initialize

In the Initialize section of the Rollout Deployment step, you can see the same object descriptions as the Service Manifests section:

Initializing..

Manifests [Post template rendering] :

# displays the manifests taken from the Service Manifests section

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: harness-example-config
data:
  key: value
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harness-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: harness-example
  template:
    metadata:
      labels:
        app: harness-example
    spec:
      containers:
        - name: harness-example
          image: registry.hub.docker.com/library/nginx:stable-perl
          envFrom:
            - configMapRef:
                name: harness-example-config
          ports:
            - containerPort: 80

# Validates the YAML syntax of the manifest with a dry run

Validating manifests with Dry Run

kubectl --kubeconfig=config apply --filename=manifests-dry-run.yaml --dry-run
configmap/harness-example-config created (dry run)
deployment.apps/harness-example-deployment configured (dry run)

Done.

Now that Harness has ensured the manifests can be used, it will process them.

Prepare

In the Prepare section, you can see that Harness versions the ConfigMap and Secret resources (for more information, see Versioning and Annotations).

Manifests processed. Found following resources: 

# determine if the resources are versioned

Kind         Name                         Versioned
ConfigMap    harness-example-config       true
Deployment   harness-example-deployment   false

# indicates that these objects have been released before

Current release number is: 2

Previous Successful Release is 1

# removes unneeded releases

Cleaning up older and failed releases

# identifies new Deployment workload

Managed Workload is: Deployment/harness-example-deployment

# versions the new release

Versioning resources.

Done.

Now Harness can apply the manifests.

Apply

The Apply section shows the kubectl commands for applying your manifests.

# the manifests in the Service Manifests section are compiled into one file and applied

kubectl --kubeconfig=config apply --filename=manifests.yaml --record

# the objects applied

configmap/harness-example-config-2 configured
deployment.apps/harness-example-deployment configured

Done.
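
You can list the ConfigMaps to see the release suffixes Harness applies when versioning. A hypothetical listing, assuming two releases of the example:

kubectl get configmaps

NAME                       DATA   AGE
harness-example-config-1   1      1d
harness-example-config-2   1      9h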

Now that the manifests are applied, you can see the container and pod details described in Wrap Up.

Wrap Up

Wrap Up is long and uses a kubectl describe command to provide information on all containers and pods deployed:

kubectl --kubeconfig=config describe --filename=manifests.yaml

Here is a sample from the output that displays the Kubernetes RollingUpdate:

# Deployment name

Name: harness-example-deployment

# namespace from Deployment manifest

Namespace: default
CreationTimestamp: Wed, 13 Feb 2019 01:00:49 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 2
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --kubeconfig=config --f...
kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true

# Selector applied

Selector: app=harness-example,harness.io/track=stable

# number of replicas from the manifest

Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable

# RollingUpdate strategy

StrategyType: RollingUpdate
MinReadySeconds: 0

# RollingUpdate progression

RollingUpdateStrategy: 25% max unavailable, 25% max surge

As you look through the description in Wrap Up, you can see the label that was added:

add label: harness.io/track: stable

You can use the harness.io/track: stable label as a selector for managing traffic to these pods, or for testing the pods. For more information, see Versioning and Annotations.
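
For example, to list only the pods in the stable track:

kubectl get pods -l harness.io/track=stable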

Canary Workflow Deployment

Now that the setup is complete, you can click Deploy in the Workflow to deploy the artifact to your cluster.

Next, select the artifact build version and click SUBMIT.

The Workflow is deployed.

Now that you have successfully deployed your artifact to your Kubernetes cluster pods using your Harness Application, look at the completed workload in the deployment environment of your Kubernetes cluster.

For example, here is the Deployment workload in Google Kubernetes Engine:

Or you can simply connect to your cluster in a terminal and see the pod(s) deployed:

john_doe@cloudshell:~ (project-15454)$ kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
harness-example-deployment-7df7559456-xdwg5   1/1     Running   0          9h

