Azure Deployments

Updated 4 days ago by Michael Cretzman

This guide walks you through deploying a Docker image from an Azure Container Registry (ACR) repo to an Azure Kubernetes Service (AKS) cluster. This is a popular deployment scenario, and walking through the steps involved will help you set it up in Harness for your own microservices and apps.

For a vendor-agnostic Harness Docker to Kubernetes deployment, see our Kubernetes Deployments doc.

Azure deployment in Harness Manager

The same deployment in Kubernetes Dashboard

Deployment Summary

For a general overview of how Harness works, see Harness Architecture and Application Checklist.

The following list describes the major steps we will cover in this guide:

  1. Install the Harness Kubernetes Delegate in an AKS Kubernetes cluster.
  2. Add Cloud Providers. We will create two Harness Cloud Providers:
    1. Kubernetes Cloud Provider - This is a connection to your AKS Kubernetes cluster using the Harness Delegate installed in that cluster.
    2. Azure Cloud Provider - This is a connection to your Azure account to access ACR. For other artifact repositories, a Harness Artifact Server connection is used. For Azure, Harness uses a Cloud Provider connection.
    Why two connections to Azure? When you create an AKS cluster, Azure also creates a service principal to support cluster operability with other Azure resources. You can use this auto-generated service principal for authentication with an ACR registry. If you can use this method, then only the Kubernetes Cloud Provider is needed.
    In this guide, we create separate connections for AKS and ACR because, in some instances, you might not be able to assign the required role to the auto-generated AKS service principal granting it access to ACR. For more information, see Authenticate with Azure Container Registry from Azure Kubernetes Service from Azure.
  3. Create the Harness Application for your Azure CD pipeline. The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your microservice using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD.
  4. Create the Harness Service using the Kubernetes type.
    1. Set up your Kubernetes manifests and any config variables and files.
    2. Set the ImagePullSecrets setting to true. This will enable Kubernetes in AKS to pull the Docker image from ACR.
  5. Create the Harness Environment containing the Service Infrastructure definition of your AKS cluster, and any overrides.
  6. Create the Kubernetes deployment Harness Workflow.
  7. Deploy the Workflow to AKS. The deployment will pull the Docker image from ACR at runtime.
  8. Advanced options not covered in this guide:
    1. Create a Harness Pipeline for your deployment, including Workflows and Approval steps. For more information, see Pipelines.
    2. Create a Harness Trigger to automatically deploy your Workflows or Pipeline according to your criteria. For more information, see Triggers.
    3. Create Harness Infrastructure Provisioners for your deployment environments. For more information, see Infrastructure Provisioners.

What Are We Going to Do?

This guide walks you through deploying a Docker image from Azure ACR to Azure AKS using Harness. At a high level, the Harness deployment does the following:

  • Docker Image - Pull Docker image from Azure ACR.
  • Kubernetes Cluster - Deploy the Docker image to a Kubernetes cluster in Azure AKS in a Kubernetes Rolling Deployment.

What Are We Not Going to Do?

This is a brief guide that covers the basics of deploying ACR artifacts to AKS. It does not cover the advanced options listed in the Deployment Summary above, such as Pipelines, Triggers, and Infrastructure Provisioners.

Before You Begin

The following are required:

  • ACR repository - An Azure account with an ACR repository you can connect to Harness.
  • AKS Kubernetes cluster - An AKS Kubernetes cluster running in your Azure environment.

We will walk you through the process of setting up Harness with connections to ACR and AKS.

Permissions and Roles

This section discusses the permissions and roles needed for the Harness connections to Azure. The setup steps for the Harness Delegate and Cloud Providers that use these roles are provided in their respective sections.

The Azure permissions and roles required for each of the Harness connections to Azure are as follows:

  • Harness Kubernetes Delegate - The Harness Kubernetes Delegate is installed in the AKS cluster where you plan to deploy. You simply need to log into your AKS cluster and install it. No additional role is required.
    • The Harness Kubernetes Delegate install file will create a pod in the cluster and make an outbound connection to the Harness Manager. No Azure permissions are required.
    • The minimum Delegate resource requirements in the AKS cluster are 8GB RAM and 6GB Disk Space. Your AKS cluster will need enough resources to run the Delegate and your app. For the example in this guide, we created a cluster with 4 cores and 16GB of total memory.
    • For information about Harness Delegates, see Delegate Installation and Management, Delegate Server Requirements, and Delegate Connection Requirements.
  • Harness Kubernetes Cluster Cloud Provider - You will use the credentials of the Harness Kubernetes Delegate you installed in AKS for the Kubernetes Cluster Cloud Provider. No Azure permissions are required.
If you choose to use the Harness Azure Cloud Provider to connect to AKS, then you must assign the AKS Owner role to an Azure App Registration. The Client ID (Application ID), Tenant ID (also called the Directory ID), and Key for that Azure App Registration are then used to set up the Harness Azure Cloud Provider.
  • Harness Azure Cloud Provider - The Azure Cloud Provider connects to the ACR container. The Azure Cloud Provider requires the following App Registration information: Client ID (Application ID), Tenant ID (also called the Directory ID), and a Key. The Azure App you use for this connection must have the Reader role on the ACR container you want to use.
Why two connections to Azure? When you create an AKS cluster, Azure also creates a service principal to support cluster operability with other Azure resources. You can use this auto-generated service principal for authentication with an ACR registry. If you can use this method, then only the Kubernetes Cloud Provider is needed.
In this guide, we create separate connections for AKS and ACR because, in some instances, you might not be able to assign the required role to the auto-generated AKS service principal granting it access to ACR. For more information, see Authenticate with Azure Container Registry from Azure Kubernetes Service from Azure.

To register an App and assign it a role in an ACR container, the App must exist within the scope of an Azure Resource group. The resource group includes the resources that you want to manage as a group, and is typically set up by your Azure account administrator. It is a common Azure management scope. For more information, see Deploy resources with Resource Manager templates and Azure portal from Azure.

To set up the ACR container with the Azure App and Reader role, do the following:

  1. In the ACR container, click Access control (IAM).
  2. Click Add a role assignment.
  3. In Role, enter Reader.
  4. In Assign access to, select Azure AD user, group, or service principal.
  5. In Select, enter the name of the Azure App that you will use to connect Harness. In this example, the App is named doc-app.
  6. Click the name of the App. When you are finished, the settings will look something like this:
  7. Click Save.

When you add the Azure Cloud Provider later in this guide, you will use the Client ID (Application ID), Tenant ID (also called the Directory ID), and a Key from that App to set up the Azure Cloud Provider. Harness will use the Reader role attached to the Azure App to connect to your ACR container.
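If you prefer the CLI to the portal steps above, the Reader role assignment can also be sketched with the Azure CLI. This is a sketch only: doc-app is the example App name from this guide, and the registry name is a placeholder you would replace with your own.

  # Look up the service principal ID for the example App (doc-app)
  $ APP_ID=$(az ad sp list --display-name doc-app --query "[0].appId" -o tsv)

  # Get the resource ID of the ACR registry to use as the role scope
  $ ACR_ID=$(az acr show --name <registry_name> --query id -o tsv)

  # Grant the App the Reader role on the registry
  $ az role assignment create --assignee "$APP_ID" --role Reader --scope "$ACR_ID"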

Delegate Setup

The simplest method for connecting Harness to AKS is to install the Harness Kubernetes Delegate in your AKS cluster and then set up the Harness Kubernetes Cluster Cloud Provider to use the same credentials as the Delegate.

Here is a quick summary of the steps for installing the Kubernetes Delegate in your AKS cluster:

  1. Download the Harness Kubernetes Delegate.
    1. In Harness, click Setup.
    2. Click Harness Delegates.
    3. Click Download Delegate, and then click Kubernetes.
      The Delegate Setup dialog appears.
    4. In Name, enter a name for your Delegate, for example harness-sample-k8s-delegate. You will use this name later when selecting this Delegate in the Kubernetes Cluster Cloud Provider dialog.
    5. Click SUBMIT. The Kubernetes file is downloaded to your computer.
    6. In a Terminal, navigate to the folder where the Kubernetes file was downloaded and extract the YAML file:

      $ tar -zxvf harness-delegate-kubernetes.tar.gz
    7. Navigate into the folder that was extracted:

      $ cd harness-delegate-kubernetes

      The Kubernetes Delegate YAML file is ready to be installed in your AKS cluster.
  2. Install the Harness Kubernetes Delegate in the AKS Kubernetes cluster. The easiest way to install the Delegate is to use the Azure CLI locally.
    1. In the same Terminal you used to extract the Kubernetes Delegate YAML file, log into your Azure Subscription:

      $ az login -u <username> -p <password>
    2. Connect to the AKS cluster where you plan to deploy:

      $ az aks install-cli

      $ az aks get-credentials --resource-group <resource_group> --name myHarnessCluster
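
      To confirm that kubectl now points at the AKS cluster, you can check the current context; by default, az aks get-credentials names the context after the cluster:

      $ kubectl config current-context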
    3. Install the Harness Kubernetes Delegate:

      $ kubectl apply -f harness-delegate.yaml

      You will see the following output:

      namespace/harness-delegate created
      clusterrolebinding.rbac.authorization.k8s.io/harness-delegate-cluster-admin created
      secret/harness-sample-k8s-delegate-proxy created
      statefulset.apps/harness-sample-k8s-delegate-vkjrqz created
    4. Verify that the Delegate pod is running:

      $ kubectl get pods -n harness-delegate

      The output shows the status of the pod:

      harness-sample-k8s-delegate-vkjrqz-0 1/1 Running 0 57s
  3. View the Delegate in Harness. In Harness, open the Harness Delegates page. Once the Delegate is installed, it is listed on the Installations page within a few moments.

Cloud Providers Setup

In this section, we will add a Harness Kubernetes Cluster Cloud Provider and an Azure Cloud Provider to your account.

Kubernetes Cluster Cloud Provider

When a Kubernetes cluster is created, you specify the authentication methods for the cluster. For a Kubernetes Cluster Cloud Provider in Harness, you can use these methods to enable Harness to connect to the cluster as a Cloud Provider, or you can simply use the Harness Kubernetes Delegate installed in the cluster.

For this guide, we will set up the Kubernetes Cluster Cloud Provider using the Delegate we installed earlier as the authentication method.

Add Kubernetes Cluster Cloud Provider

To set up the Kubernetes Cluster Cloud Provider, do the following:

  1. In Harness, click Setup.
  2. In Setup, click Cloud Providers.
  3. Click Add Cloud Provider. The Cloud Provider dialog appears.
  4. In Type, select Kubernetes Cluster. The Cloud Provider dialog changes to display the Kubernetes Cluster settings.
  5. In Display Name, enter a name for the Cloud Provider, such as Harness Sample K8s Cloud Provider. You will use this name when setting up the Service Infrastructure settings in Harness later.
  6. Select Inherit Cluster Details from selected Delegate.
  7. In Delegate Name, select the name of the Delegate you installed in your cluster earlier. When you are finished, the dialog will look something like this:
  8. Click TEST to verify the settings, and then click SUBMIT. The Kubernetes Cloud Provider is added.

Azure Cloud Provider

The Azure Cloud Provider connects to the ACR container. The Azure Cloud Provider requires the following App Registration information: Client ID (Application ID), Tenant ID (also called the Directory ID), and a Key. The Azure App you use for this connection must have the Reader role on the ACR container you want to use.

Add Azure Cloud Provider

To set up the Azure Cloud Provider, do the following:

  1. In Harness, click Setup.
  2. In Setup, click Cloud Providers.
  3. Click Add Cloud Provider. The Cloud Provider dialog appears.
  4. In Type, select Microsoft Azure. The Cloud Provider dialog changes to display the Microsoft Azure settings.
  5. In Display Name, enter a name for the Cloud Provider, such as azure. You will use this name when setting up the Artifact Source settings in Harness later.
  6. In Client ID, enter the Client/Application ID for the Azure app registration you are using. It is found in the Azure Active Directory App registrations. For more information, see Quickstart: Register an app with the Azure Active Directory v1.0 endpoint from Microsoft.

    To access resources in your Azure subscription, you must assign the Azure App registration using this Client ID to a role in that subscription. Later, when you set up an Artifact Source in a Harness Service, you will select a subscription. If the Azure App registration using this Client ID is not assigned a role in that subscription, no subscriptions will be available. For more information, see Assign the application to a role from Microsoft.
  7. In Tenant ID, enter the Tenant ID of the Azure Active Directory in which you created your application. This is also called the Directory ID. For more information, see Get tenant ID from Azure.
  8. In Key, enter the authentication key for your application. This is found in Azure Active Directory > App registrations. Double-click the App name. Click Settings, and then click Keys. You cannot view existing key values, but you can create a new key. For more information, see Get application ID and authentication key from Azure. Azure has previewed a new App Registrations blade that displays keys in the Certificates & secrets tab, under Client secrets.
  9. When you are finished the Azure Cloud Provider dialog will look something like this:
  10. Click TEST to verify the settings, and then click SUBMIT. The Azure Cloud Provider is added.

You're all connected! Now you can start using Harness to set up CD.

Application Setup

The following procedure creates a Harness Application for an AKS Kubernetes deployment using an ACR repository.

An Application in Harness represents a logical group of one or more entities, including Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD. For more information, see Application Checklist.

To create the Harness Application, do the following:

  1. In Harness, click Setup.
  2. Click Add Application. The Application dialog appears.
  3. Give your Application a name that describes your microservice or app. For the purposes of this guide, we use the name ACR-to-AKS.
  4. Click SUBMIT. The new Application is added.
  5. Click the Application name to open the Application. The Application entities are displayed.

Harness Service

There are different types of Harness Services for different deployment platforms. The Kubernetes type includes Kubernetes-specific settings.

To add the Kubernetes Service, do the following:

  1. In your new Application, click Services. The Services page appears.
  2. In the Services page, click Add Service. The Service dialog appears.
  3. In Name, enter a name for your Service, such as Todolist-ACR.
  4. In Description, enter a description for your service.
  5. In Deployment Type, select Kubernetes.
  6. Click the Enable Kubernetes V2 checkbox. This setting configures the Service with the latest Harness Kubernetes Service settings.
  7. Click SUBMIT. The new Service is displayed.

Next, we will walk through how to set up the Kubernetes manifest file and use the Service features.

Add ACR Artifact Source

An Artifact Source in a Service is the microservice or application artifact you want to deploy.

For this Azure deployment, the Artifact Source uses the Azure Cloud Provider you set up for your Harness account to connect to ACR (as described in Azure Cloud Provider), and selects a Todo List sample app Docker image as the artifact.

To add an Artifact Source to this Service, do the following:

  1. In the Service, click Add Artifact Source, and select Azure Container Registry. The Artifact Source dialog appears.
  2. Configure the following fields and click SUBMIT.
  • Cloud Provider - Select the Azure Cloud Provider we set up earlier.
  • Subscription - Select the Subscription set up in your ACR container registry. To locate the Subscription in ACR, click Overview, and see Subscription.
  • Azure Registry Name - Select the registry you want to use.
  • Repository Name - Select the repository containing the Docker image you want to use.

When you are finished, the Artifact Source dialog will look something like this:

You can add multiple Artifact Sources to a Service and view the build history for each one by clicking Artifact History.

Add Manifests

The Manifests section of Service contains the configuration files that describe the desired state of your application in terms of Kubernetes object descriptions.

What Can I Add in Manifests?

You can add any Kubernetes configurations files, formatted in YAML, such as object descriptions, in one or more files.

You can use Go templating and Harness built-in variables in combination in your Manifests files. For information about the features of Manifests, see Add Manifests in the Harness Kubernetes Deployments doc.

For this guide, we will use the default manifests, with one important change for ACR: we will edit the Kubernetes imagePullSecret setting.

Modify ImagePullSecret

To pull the image from the private ACR registry, Kubernetes needs credentials. These credentials are in the Azure Cloud Provider you set up, and used to configure the Artifact Source for the Service.

The imagePullSecrets field in the deployment.yaml configuration file specifies that Kubernetes should get the credentials from a Secret named {{.Values.name}}-dockercfg:

{{- if .Values.createImagePullSecret}}
apiVersion: v1
kind: Secret
metadata:
  name: {{.Values.name}}-dockercfg
  annotations:
    harness.io/skip-versioning: true
data:
  .dockercfg: {{.Values.dockercfg}}
type: kubernetes.io/dockercfg
{{- end}}
If you are new to Kubernetes Secrets, see Pull an Image from a Private Registry from Kubernetes.
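
For comparison, this is a sketch of how you would create the same kind of registry pull Secret manually with kubectl, outside of Harness. The placeholder values stand in for your App Registration credentials; Harness generates this Secret for you from the Azure Cloud Provider, so this step is not needed in this guide.

  # Create a docker-registry Secret holding the ACR credentials (placeholders, not real values)
  $ kubectl create secret docker-registry acr-dockercfg \
      --docker-server=<registry_name>.azurecr.io \
      --docker-username=<client_id> \
      --docker-password=<key> \
      --namespace=<namespace>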

Since we use Go templating in Manifests, the name and dockercfg values reference the parameters in the values.yaml file in Manifests:

name: harness-example
replicas: 1

image: ${artifact.metadata.image}
dockercfg: ${artifact.source.dockerconfig}

At runtime, these values will be replaced with the Docker image name and source. This tells Kubernetes where to obtain the Docker image for deployment using the Artifact Source you set up in Service. Most importantly, it passes the Secret needed to access the remote registry, ACR.

By default, the Secret setting is disabled:

createImagePullSecret: false

To enable the Secrets setting and tell Kubernetes to use the credentials in the Azure Cloud Provider connection to pull the Docker image from ACR, do the following:

  1. In Manifests, click values.yaml, and then click Edit.
  2. Change the createImagePullSecret label to true:

    createImagePullSecret: true
  3. Click Save.

That's it. In your AKS cluster at deployment runtime, Kubernetes will use the Azure Cloud Provider credentials to obtain the Docker image from ACR.

When you create an AKS cluster, Azure also creates a service principal to support cluster operability with other Azure resources. You can use this auto-generated service principal for authentication with an ACR registry. If you can use this method, then only the Kubernetes Cloud Provider is needed and the createImagePullSecret setting can be left as false. In this guide, we create separate connections for AKS and ACR because, in some instances, you might not be able to assign the required role to the auto-generated AKS service principal granting it access to ACR. For more information, see Authenticate with Azure Container Registry from Azure Kubernetes Service from Azure.

Now we can set up the deployment Environment to tell Harness where to deploy the Docker image.

Namespace Variable

Before we set up the deployment Environment, let's look at one more interesting setting. Click values.yaml and locate the namespace setting:

namespace: ${infra.kubernetes.namespace}

Next, click the namespace.yaml file to see the variable referenced in values.yaml:

{{- if .Values.createNamespace}}
apiVersion: v1
kind: Namespace
metadata:
  name: {{.Values.namespace}}
{{- end}}

The ${infra.kubernetes.namespace} variable is a Harness built-in variable. It references the Kubernetes cluster namespace value entered in the Harness Environment, which you will create later.

The ${infra.kubernetes.namespace} variable lets you enter any value in the Environment Namespace setting; at runtime, the Kubernetes Namespace manifest uses that name to create a namespace.
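
For example, if you enter example in the Environment's Namespace setting, the namespace.yaml template above renders at runtime as:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: example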

Config Variables and Files

For the purpose of this guide, we don't use many of the other Service settings you can use. For information on the Config Variables and Files settings, see Configuration Variables and Files.

Harness Environments

Once you've added a Service to your Application, you can define Environments where your Service can be deployed. In an Environment, you specify the following:

  • A Harness Service, such as the Service with a Docker image artifact you configured.
  • A deployment type, such as Kubernetes.
  • A Cloud Provider, such as the Kubernetes Cluster Cloud Provider that you added in Cloud Providers Setup.

An Environment can be a Dev, QA, Production, or other environment. You can deploy one or many Services to each environment.

Create a New Harness Environment

The following procedure creates an Environment for the Service we set up earlier.

  1. In your Harness Application, click Environments. The Environments page appears.
  2. Click Add Environment. The Environment dialog appears.
  3. In Name, enter a name that describes the deployment environment, for example, AKS.
  4. In Environment Type, select Non-Production.
  5. Click SUBMIT. The new Environment page appears.

Add a Service Infrastructure

In Service Infrastructure, you define the AKS cluster to use for deployment.

To add a Service Infrastructure, do the following:

  1. In the Harness Environment, click Add Service Infrastructure. The Service Infrastructure dialog appears.
  2. In Service, select the Harness Service you created earlier.
  3. In Cloud Provider, select the Cloud Provider you added earlier. Ensure that you select the Kubernetes Cluster Cloud Provider you set up for the AKS connection and not the Azure Cloud Provider you set up for the ACR connection.
  4. Click Next.
  5. In Namespace, enter the name of the cluster namespace you want to use. As we noted in Namespace Variable, you can enter any value here and the Service will use it in its Namespace manifest to create the namespace at runtime.
  6. When you are done, the dialog will look something like this:
  7. Click SUBMIT. The new Service Infrastructure is added to the Harness Environment.

That is all you have to do to set up the deployment Environment in Harness.

Now that you have the Service and Environment set up, you can create the deployment Workflow in Harness.

Your Environment can overwrite Service Config Variables, Config Files, and other settings. This enables you to have a Service keep its settings but have them changed when used with this Environment. For example, you might have a single Service but an Environment for QA and an Environment for Production, and you want to overwrite the values.yaml setting in the Service depending on the Environment. We don't overwrite any Services variables in this guide. For more information, see Override Service Settings in the Kubernetes Deployments doc.

Workflows and Deployments

This section will walk you through creating a Kubernetes Workflow in Harness and what the Workflow steps deployment logs include.

In this guide, the Workflow performs a simple Rolling Deployment, which is a Kubernetes Rolling Update. For a detailed explanation, see Performing a Rolling Update from Kubernetes.

For information on other Workflow types, see Kubernetes Deployments.

To create a Rolling Workflow for Kubernetes, do the following:

  1. In your Application, click Workflows.
  2. Click Add Workflow. The Workflow dialog appears.
  3. In Name, enter a name for your Workflow, such as Todo List AKS.
  4. In Workflow Type, select Rolling Deployment.
  5. In Environment, select the Environment you created for your Kubernetes deployment.
  6. In Service Infrastructure, select the Service Infrastructure in the Environment to use. When you are done, the dialog will look something like this:
  7. Click SUBMIT. The new Rolling Workflow is pre-configured.

As you can see, there is a Rollout Deployment step set up automatically. That's all the Workflow set up required. The Workflow is ready to deploy. When it is deployed, it will look like this:

You can see each section of the Rollout Deployment listed on the right. To see what that Rollout Deployment step does at runtime, let's look at the logs for each section.

Initialize

The Initialize step renders the Kubernetes object manifests in the correct order and validates them.

Initializing..


Manifests [Post template rendering] :

---
apiVersion: v1
kind: Namespace
metadata:
  name: example
---
apiVersion: "v1"
kind: "Secret"
metadata:
  annotations:
    harness.io/skip-versioning: "true"
  finalizers: []
  labels: {}
  name: "harness-example-dockercfg"
  ownerReferences: []
data:
  .dockercfg: "***"
stringData: {}
type: "kubernetes.io/dockercfg"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: harness-example-config
data:
  key: value
---
apiVersion: v1
kind: Service
metadata:
  name: harness-example-svc
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: harness-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harness-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: harness-example
  template:
    metadata:
      labels:
        app: harness-example
    spec:
      imagePullSecrets:
      - name: harness-example-dockercfg
      containers:
      - name: harness-example
        image: harnessexample.azurecr.io/todolist-sample:latest
        envFrom:
        - configMapRef:
            name: harness-example-config


Validating manifests with Dry Run

kubectl --kubeconfig=config apply --filename=manifests-dry-run.yaml --dry-run
namespace/example configured (dry run)
secret/harness-example-dockercfg created (dry run)
configmap/harness-example-config created (dry run)
service/harness-example-svc configured (dry run)
deployment.apps/harness-example-deployment configured (dry run)

Done.

Note the imagePullSecrets settings. Harness used the Go templating in Service to fully form the correct YAML for Kubernetes.

Prepare

The Prepare section identifies the resources used and versions any for release history. Every Harness deployment creates a new release with an incrementally increasing number. Release history is stored in the Kubernetes cluster in a ConfigMap. This ConfigMap is essential for release tracking, versioning and rollback.

For more information, see Releases and Versioning.

Manifests processed. Found following resources: 

Kind        Name                         Versioned
Namespace   example                      false
Secret      harness-example-dockercfg    false
ConfigMap   harness-example-config       true
Service     harness-example-svc          false
Deployment  harness-example-deployment   false


Current release number is: 3

No previous successful release found.

Cleaning up older and failed releases

kubectl --kubeconfig=config delete ConfigMap/harness-example-config-2

configmap "harness-example-config-2" deleted

Managed Workload is: Deployment/harness-example-deployment

Versioning resources.

Done

Apply

The Apply section deploys the manifests from the Service Manifests section as one file.

kubectl --kubeconfig=config apply --filename=manifests.yaml --record

namespace/example unchanged
secret/harness-example-dockercfg created
configmap/harness-example-config-3 created
service/harness-example-svc unchanged
deployment.apps/harness-example-deployment configured

Done

Wait for Steady State

The Wait for Steady State section shows the containers and pods rolled out.

kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only

kubectl --kubeconfig=config rollout status Deployment/harness-example-deployment --watch=true


Status : Waiting for deployment "harness-example-deployment" rollout to finish: 1 old replicas are pending termination...
Event : Pod harness-example-deployment-cfdb66bf4-qw5g9 pulling image "harnessexample.azurecr.io/todolist-sample:latest" Pulling
Event : Pod harness-example-deployment-cfdb66bf4-qw5g9 Successfully pulled image "harnessexample.azurecr.io/todolist-sample:latest" Pulled
Event : Pod harness-example-deployment-cfdb66bf4-qw5g9 Created container Created
Event : Pod harness-example-deployment-cfdb66bf4-qw5g9 Started container Started
Event : Deployment harness-example-deployment Scaled down replica set harness-example-deployment-6b8794c59 to 0 ScalingReplicaSet

Status : Waiting for deployment "harness-example-deployment" rollout to finish: 1 old replicas are pending termination...
Event : ReplicaSet harness-example-deployment-6b8794c59 Deleted pod: harness-example-deployment-6b8794c59-2z99v SuccessfulDelete

Status : Waiting for deployment "harness-example-deployment" rollout to finish: 1 old replicas are pending termination...

Status : deployment "harness-example-deployment" successfully rolled out

Done.

Wrap Up

The Wrap Up section shows the Rolling Update strategy used. Here is a sample:

...
Name: harness-example-deployment
Namespace: example
CreationTimestamp: Wed, 06 Mar 2019 20:16:30 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 3
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --kubeconfig=config --f...
kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true
Selector: app=harness-example
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 25m deployment-controller Scaled up replica set harness-example-deployment-86c6d74db8 to 1
Normal ScalingReplicaSet 14m deployment-controller Scaled up replica set harness-example-deployment-6b8794c59 to 1
Normal ScalingReplicaSet 4s deployment-controller Scaled down replica set harness-example-deployment-86c6d74db8 to 0
Normal ScalingReplicaSet 4s deployment-controller Scaled up replica set harness-example-deployment-cfdb66bf4 to 1
Normal ScalingReplicaSet 1s deployment-controller Scaled down replica set harness-example-deployment-6b8794c59 to 0

Done.
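
The 25% max unavailable, 25% max surge values shown in the log are the Kubernetes defaults for a Deployment. If you wanted to control the rollout rate yourself, you could set them explicitly in the Deployment spec in your Service Manifests, for example:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%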

AKS Workflow Deployment

Now that the setup is complete, you can click Deploy in the Workflow to deploy the artifact to your cluster.

Next, select the artifact build version and click SUBMIT.

The Workflow is deployed.

To see the completed deployment, log into your Azure AKS cluster, click Insights, and then click Controllers.

If you are using an older AKS cluster, you might have to enable Insights.

The container details show the Docker image deployed:

You can also launch the Kubernetes dashboard to see the results:

To view the Kubernetes dashboard, in your AKS cluster, click Overview, click Kubernetes Dashboard, and then follow the CLI steps.

Troubleshooting

The following troubleshooting steps should help address common issues.

Failed to pull image

Kubernetes might fail to pull the Docker image set up in your Service:

Event  : Pod   harness-example-deployment-6b8794c59-2z99v   Error: ErrImagePull   Failed
Event : Pod harness-example-deployment-6b8794c59-2z99v Failed to pull image "harnessexample.azurecr.io/todolist-sample:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://harnessexample.azurecr.io/v2/todolist-sample/manifests/latest: unauthorized: authentication required Failed

This error occurs when the createImagePullSecret setting is set to false in the values.yaml file in Service Manifests.

To fix this, set createImagePullSecret to true, as described in Modify ImagePullSecret:

createImagePullSecret: true
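
To confirm the diagnosis from the cluster side, you can check whether the pull Secret exists in the target namespace (the Secret and namespace names here are the examples used in this guide):

  $ kubectl get secret harness-example-dockercfg --namespace example

If the Secret is missing, Kubernetes has no registry credentials and the image pull fails.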

Next Steps

