Harness Kubernetes v1 FAQ

Updated 2 months ago by Michael Cretzman

This document covers Kubernetes and Helm implementations using Harness Kubernetes Version 1 features. For Version 2 features (recommended), see Kubernetes Deployments Overview.

This FAQ answers all of the common questions about Harness integration with Kubernetes, how Harness deploys to Kubernetes clusters and pods, and how you can customize Harness to your desired Kubernetes orchestration.

Installations and Permissions

How do I Connect to a Kubernetes Cluster in Harness?

This section gives an overview of connecting Harness to Kubernetes. For details on connecting to a Kubernetes cluster or Kubernetes on cloud providers, see Add Cloud Providers.

You add a Kubernetes cluster as a Cloud Provider in the Harness Account section:

  1. In Setup, under Account, click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. In Type, select Kubernetes Cluster. The Kubernetes options appear.
  4. Fill out the dialog and click SUBMIT.

Can I Install a Harness Delegate in a Kubernetes Cluster?

You can install a Harness Kubernetes delegate inside a Kubernetes cluster. You can add a startup script to the delegate, called a Delegate Profile, that will be applied every time the delegate is created or restarted. For information on installing the Harness Kubernetes delegate and profiles, see the related steps in Delegate Installation.

Instructions for running the Harness Kubernetes delegate are also included in the README file that accompanies the Harness Kubernetes delegate download.

The Harness Kubernetes delegate is configured and run using a YAML file. It creates a delegate that has the cluster-admin role in the Kubernetes cluster. For more information on this role, see Default Roles and Role Bindings from Kubernetes. This Kubernetes delegate has all the permissions it requires; it does not need the login credentials or certificates for authentication that the traditional, external Harness delegate requires.
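The binding the delegate YAML creates is similar in spirit to the following sketch. The names here are illustrative, not the exact ones the Harness YAML generates:

```yaml
# Illustrative ClusterRoleBinding granting cluster-admin to a delegate
# service account. The account and namespace names are examples; the
# actual Harness delegate YAML defines its own.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: harness-delegate-cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: harness-delegate
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```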

You can edit the Harness Kubernetes delegate YAML file before using it to create the Harness Kubernetes delegate. For example, you can change proxy settings or the desired Helm version for the Harness Kubernetes delegate to use.
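As a rough sketch of the kind of edit this involves, proxy settings are typically plain environment variables on the delegate container. The variable names and values below are hypothetical; use the ones that appear in your downloaded delegate YAML file:

```yaml
# Hypothetical excerpt of a delegate container spec showing proxy
# environment variables you might edit before applying the YAML.
        env:
          - name: PROXY_HOST
            value: "proxy.example.com"   # example proxy host
          - name: PROXY_PORT
            value: "3128"                # example proxy port
          - name: PROXY_SCHEME
            value: "http"
```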

What are the Google Cloud Platform Permissions for Kubernetes and Harness?

The Google Cloud Platform (GCP) service account requires the Container Engine Admin role to get the Kubernetes master username and password. Harness also requires Storage Object Viewer permissions.

Kubernetes Components

What Kubernetes Entities does Harness Support?

  • Controllers - Harness supports Kubernetes controller entities. See Controller from Kubernetes.
  • Labels - Harness uses Kubernetes labels to reference (and cross-reference) all Kubernetes components. Labels are applied when Harness creates a Kubernetes controller and service, and they make the service route traffic to the pods that the controller brings up. See Labels from Kubernetes.
  • ConfigMaps - Harness creates ConfigMaps automatically from all unencrypted Harness service variables and configuration files. See ConfigMap from Kubernetes.
  • Secret maps - Store all encrypted Harness service variables and configuration information. See Secret from Kubernetes.
  • Horizontal Pod Autoscaler - Harness lets you configure the Horizontal Pod Autoscaler to automatically scale pod replicas based on targets. See Horizontal Pod Autoscaler from Kubernetes.
  • Services - Not to be confused with a Harness service (the microservice or application you are deploying), a Kubernetes service describes how to access pods in a cluster, both internally and externally. See Connecting Applications with Services from Kubernetes.
  • Ingress rules - Manage external traffic coming into the Kubernetes cluster. See Ingress from Kubernetes.
  • Istio route rules - A type of Ingress controller that supports traffic splitting for the different revisions that Harness deploys. See Istio from Kubernetes.

What are the Supported Kubernetes Controllers?

Kubernetes controllers include the container specification and create the pods to run the application. Controllers manage routine tasks to ensure the desired state of the cluster matches the observed state. Each controller is responsible for a particular resource.

The default controller type used by Harness is Deployment. To use a different controller type, you must define it in Harness.

Harness supports the following controllers:

  • Deployment (default) - Provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments. See Deployments from Kubernetes.
  • ReplicationController - Ensures a specified number of pod replicas are running at any one time, guaranteeing that a pod or a homogeneous set of pods is running and available. See ReplicationController from Kubernetes.
  • ReplicaSet - The next-generation ReplicationController. A ReplicaSet supports the new set-based selector requirements, whereas a ReplicationController only supports equality-based selector requirements. See ReplicaSet from Kubernetes.
  • StatefulSet - Similar to a Deployment, a StatefulSet manages Pods that are based on an identical container spec. While StatefulSet pods are created from the same spec, they are not interchangeable; each pod has a persistent identifier maintained across any rescheduling. See StatefulSets from Kubernetes.
  • DaemonSet - Ensures all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to them; as nodes are removed, pods are garbage collected. Deleting a DaemonSet cleans up the pods it created. See DaemonSet from Kubernetes.
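As a concrete illustration of the default controller type, a minimal Deployment manifest of the kind the table describes looks like this (the names, image, and replica count are examples, not Harness defaults):

```yaml
# Minimal Deployment: three replicas of a single-container pod,
# matched to the pod template by a label selector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.25
          ports:
            - containerPort: 80
```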

Where are Controllers Set Up?

Controllers are set up in Harness services that use a Docker Image Artifact Type.

For information on setting up a service using a Docker Image Artifact Type, see Add a Docker Image Service.

For example, to configure a Deployment in a Docker Image Service, do the following:

  1. In the service, in Deployment Specification, click Kubernetes.
  2. Click Container Specification. The KUBERNETES - Container Definition dialog appears.
  3. Click Advanced Settings. The YAML code for the service appears. Note the kind: Deployment setting.
    Read the information provided in the Advanced Settings YAML comments. It provides details about required and optional variables.
  4. Modify the Deployment YAML and click SUBMIT. The Kubernetes container specification is displayed in the Service Overview. To edit it further, click Container Specification.

What Controller Types are Versioned?

The following controller types are versioned:

  • Deployment
  • ReplicaSet
  • ReplicationController

The following controller types are not versioned:

  • StatefulSet
    • Pods keep the same name when restarted.
    • Persistent volumes can be specified as templates.
  • DaemonSet
    • Does not use replicas field.
    • Always creates one pod in each node in the cluster.

How Does Harness Cache Non-Versioned Controller Types?

For non-versioned controller types, Harness caches the current YAML (if any) in case of rollback.

Cached entities are:

  • Controller - set up in Harness service.
  • ConfigMap - set up in Harness service.
  • Secret map - set up in Harness service.
  • Horizontal Pod Autoscaler - set up in Harness workflow Kubernetes Service Setup.

For non-versioned controller types, deployment happens in the workflow Setup Container step (Kubernetes Service Setup). There is no phased deployment.

Configure As Code

How Do I Add My YAML?

Harness services include a Kubernetes Container Specification section that lets you define multiple Deployment container specification details.

If you need to configure advanced settings, such as changing the container type, CPU, or imagePullPolicy, click Advanced Settings in the Kubernetes Container Specification and enter your YAML.

What Harness Placeholders are Required in the YAML?

The service Kubernetes Container Specification allows for advanced YAML and customization, but there are some placeholders required by Harness. These placeholders are described in the YAML comments at the top of the Advanced Settings, and below.

When pasting your YAML into the Kubernetes Container Definition, ensure that you do not overwrite the placeholders. There are also optional placeholders you can take advantage of for naming Kubernetes components.
Required Placeholders

There is only one required placeholder:

${DOCKER_IMAGE_NAME} - Replaced with the Docker image name and tag.

If you are pasting your YAML into the Kubernetes Container Definition, replace your image:repository/tag or a partial image ID with ${DOCKER_IMAGE_NAME}:

spec:
  containers:
    - args: []
      command:
        - "..."
      env: []
      envFrom: []
      image: "${DOCKER_IMAGE_NAME}"
Optional Placeholders

The following optional placeholders are available:

  • ${CONFIG_MAP_NAME} - Replaced with the ConfigMap name (same as controller name). ConfigMap contains all unencrypted service variables and all unencrypted config files, unless a custom config map is provided.
  • ${SECRET_MAP_NAME} - Replaced with the Secret name (same as controller name). Secret map contains all encrypted service variables and all encrypted config files.
  • ${CONTAINER_NAME} - Replaced with a container name based on the image name.
  • ${SECRET_NAME} - Replaced with the name of the generated imagePullSecret (credentials of the registry). If you are using a private Docker registry, you will want to set the ${SECRET_NAME}. See Using a Private Registry from Kubernetes.
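As a sketch of how these placeholders fit together in a container spec, the fragment below uses all four. The field layout around them is illustrative; the placeholders themselves must be kept intact:

```yaml
# Illustrative container spec using the Harness placeholders.
spec:
  imagePullSecrets:
    - name: ${SECRET_NAME}             # registry credentials for a private registry
  containers:
    - name: ${CONTAINER_NAME}
      image: ${DOCKER_IMAGE_NAME}
      envFrom:
        - configMapRef:
            name: ${CONFIG_MAP_NAME}   # unencrypted variables and files
        - secretRef:
            name: ${SECRET_MAP_NAME}   # encrypted variables and files
```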

What YAML Fields are Overwritten By Harness?

If you paste your own Kubernetes YAML in the Harness service Kubernetes Container Definition, the following labels are overwritten by Harness:

  • metadata.name
  • metadata.namespace
  • metadata.labels
    • The harness-* prefix is applied.
  • spec.selectors
    • The harness-* prefix is applied.
  • spec.replicas is how many pods should be created. Harness manages this setting in the Upgrade Containers workflow step during deployment, based on your setting (for example, 33%).

Where does Harness Get Environment Variables?

Kubernetes environment variables (env: label) are taken from Harness service variables, ConfigMap and encrypted variables (secrets). Harness service variables and ConfigMap are configured in the service Configuration section.

Secrets are any encrypted variables and files in Harness. Secrets are configured in the Harness Continuous Security section, in Secrets Management, and then they can be used in Harness service Config Variables and Config Files.

Unencrypted variables and files go into Kubernetes ConfigMap. Encrypted variables and files go into Kubernetes Secret map.

If you look in Google Cloud Platform (GCP), you will see the Config Map and Secret in Kubernetes Engine > Configuration. Click a Config Map or Secret to see the YAML. For information on Kubernetes environment variables, see Define Environment Variables for a Container and Expose Pod Information to Containers Through Environment Variables from Kubernetes.
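The standard Kubernetes mechanisms for surfacing ConfigMap and Secret entries as container environment variables look like the following sketch (the ConfigMap, Secret, and key names are examples):

```yaml
# Example env entries drawing from a ConfigMap and a Secret.
containers:
  - name: example-app
    image: example/app:1.0
    env:
      - name: LOG_LEVEL
        valueFrom:
          configMapKeyRef:
            name: example-config     # example ConfigMap name
            key: LOG_LEVEL
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: example-secrets    # example Secret name
            key: DB_PASSWORD
```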

What Variables are Overwritten or Merged by Harness?

If you paste in your Kubernetes specification, the Kubernetes environment variables (env: label) in your specification are merged by Harness as follows:

  • Harness overrides Harness service Config Variables values if the variable name you use is the same as a variable name configured in Harness.
  • Unencrypted environment variables are merged to Kubernetes ConfigMap.
  • Encrypted variables are merged in Kubernetes Secret map if the key is a legal name for an environment variable.
  • The container name is based on the expression in Harness setup (${CONTAINER_NAME}), with .<revision> appended.

Can I Override the Harness Service ConfigMap, Variables, and Files?

You can overwrite Harness service variables at the service and environment levels.

Service-Level Overrides

If you have a very large Kubernetes ConfigMap, you can override the Harness ConfigMap by pasting your Kubernetes ConfigMap into the ConfigMap YAML section of the Harness service.

Harness will use that ConfigMap instead of merging the service Config Variables and Config Files entered.

Harness does not allow you to paste YAML for the Secret map. You must use encrypted Config Variables or Config Files.
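The pasted ConfigMap is ordinary Kubernetes YAML. An illustrative example of what you might paste follows; the names, keys, and values are hypothetical:

```yaml
# Example ConfigMap YAML for the service-level override.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config   # metadata.name is overwritten by Harness
data:
  LOG_LEVEL: "info"
  FEATURE_FLAGS: "beta-ui,metrics"
  app.properties: |     # multi-line entry, usable as a mounted file
    server.port=8080
    cache.enabled=true
```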

Environment-Level Overrides

If you want to overwrite the Config Variables, Config Files, or ConfigMap YAML (and even the Helm Value YAML) for one or more Harness services, you can do so in a Harness environment’s Service Configuration Overrides section.

To overwrite the Harness service configurations in a Harness environment, do the following:

  1. In your Harness application, click the name of the application with the service(s) you want to overwrite.
  2. In Environments, click the environment where you want to overwrite a service. This will be an environment where you have already added the service(s) as part of the Service Infrastructure.
  3. In the Environment Overview page, click Add Configuration Overrides.
  4. In the Service Configuration Override dialog, select the Harness service with the ConfigMap or variables you want to override, and then click an override option, such as ConfigMap YAML, Variable Override, or File Override.
    To override all services that use this environment, in Service, select All Services.
  5. Enter the new ConfigMap YAML, variables, or file and click SUBMIT.

What is the Harness Variable Overwrite Precedence?

When managing variable overwrites, it is important to remember the precedence of variable overwriting in Harness.

Variable precedence goes, from least to greatest:

  1. Harness service ConfigMap YAML.
  2. Harness environment ConfigMap YAML targeted to all Harness services that use the environment (set in Harness environment Service Configuration Override).
  3. Harness environment ConfigMap YAML targeted to a specific Harness service.
  4. Service variable at the Harness service level.
  5. Service variable override in Harness environment targeted to all Harness services.
  6. Service variable override in Harness environment targeted to a specific Harness service.

Kubernetes ConfigMap and Secrets

How does Harness Create the Kubernetes ConfigMap?

  • When? Harness creates the Kubernetes ConfigMap automatically by default, containing all of a Harness service’s unencrypted text variables and config files.
  • How? The Kubernetes Controller is modified to include all ConfigMap entries as environment variables using envFrom configMapRef. For more information, see Configure all key-value pairs in a ConfigMap as Pod environment variables from Kubernetes.
  • How do I reference the ConfigMap? You can reference the ConfigMap in the Harness service Kubernetes Container Specification Advanced YAML with the placeholder ${CONFIG_MAP_NAME}.
  • How do I mount the ConfigMap as a volume? Unencrypted files become multi-line entries in the Kubernetes ConfigMap and can be mounted as a volume if you need to use them as a file. Here is an example of an unencrypted config file as a volume:

    containers:
      - name: ${CONTAINER_NAME}
        image: ${DOCKER_IMAGE_NAME}
        volumeMounts:
          - name: config-vol
            mountPath: /your/path
    volumes:
      - name: config-vol
        configMap:
          name: ${CONFIG_MAP_NAME}
          items:
            - key: config-file-name
              path: config-file-path
  • What ConfigMap labels are overwritten by Harness?
    • metadata.name
    • metadata.namespace
  • How is the ConfigMap named? The ConfigMap has the same name as the controller.

How does Harness Create a Kubernetes Secret Map?

  • When? The Kubernetes Secret map is created automatically using encrypted Harness service Config Variables and encrypted Harness Config Files.
  • How? You create encrypted text and files in Harness in the Continuous Security section, in Secrets Management, and then they can be used in Harness service Config Variables and Config Files.
    Encrypted files become multi-line entries in the Secret map. For more information, see Secrets from Kubernetes. The Kubernetes Controller is modified to include all Secret map entries as environment variables using valueFrom secretKeyRef. For more information, see Using Secrets as Environment Variables from Kubernetes.
  • How do I mount the Secret map as a volume? Secret maps can be mounted as a volume if you need to use them as a file. Here is an example of an encrypted config file as a volume:

    containers:
      - name: ${CONTAINER_NAME}
        image: ${DOCKER_IMAGE_NAME}
        volumeMounts:
          - name: secret-vol
            mountPath: /your/path
    volumes:
      - name: secret-vol
        secret:
          secretName: ${SECRET_MAP_NAME}
          items:
            - key: config-file-name
              path: config-file-path
  • Can I use custom YAML in Harness for Secret maps or Secrets? No. You must use the encrypted Config Variables and Config Files settings in a Harness service.
  • Are Secret Maps versioned and rolled back? Yes.
  • How do I reference the Secret map? You can reference the Secret map in the service Kubernetes Container Specification Advanced YAML with the placeholder ${SECRET_MAP_NAME}.
  • How is the Secret map named? The Secret map has the same name as the controller.

How does Harness Create a Kubernetes Secret?

  • What? Contains the credentials for a private registry.
  • How? Created automatically using credentials from the Harness connector for the artifact source, configured in your Harness Account settings.
  • How is a Secret named? The name is generated from the imagePullSecret when pulling from a private Docker registry.
  • How do I reference a Secret? You can reference the Secret in the service Kubernetes Container Specification Advanced YAML with the placeholder ${SECRET_NAME}.
  • Are Secrets versioned and rolled back? No.
  • How are Secrets named? Secrets are named using the private registry URL plus username.

Kubernetes Service

How do I Set Up a Kubernetes Service?

A Kubernetes service is different from a Harness service. A Harness service is your micro-service or application for deployment. A Kubernetes service enables applications running in a Kubernetes cluster to find and communicate with each other, and the outside world. For more information, see Connecting Applications with Services and Services from Kubernetes.

A Kubernetes service can be set up in Harness workflows that are configured with an environment that uses Kubernetes. The process is as follows:

  1. When you define a Harness environment, you define a Service Infrastructure that uses Kubernetes.
  2. When you create a workflow, you select the environment where the workflow deploys its service(s).
  3. Because the Service Infrastructure in the environment supports Kubernetes, the workflow has a Kubernetes Service Setup step where you can configure the Kubernetes service scheduler, Horizontal Pod Autoscaler, Ingress rules, and Istio.
  4. Click Kubernetes Service Setup. The Kubernetes Service Setup dialog appears.

How Do I use the Workflow Scheduler?

Harness enables you to schedule how your instances are deployed, including a resize strategy.

The Scheduler in the Kubernetes Service Setup is not the same as a Kubernetes scheduler.

How do I Set Up the Workflow Scheduler in Harness?

To set up the scheduler, do the following:

  1. In the Kubernetes Service Setup of a workflow, expand Scheduler.

  • Scheduler Name - The name you put here is used to name all the Kubernetes entities Harness creates. Harness generates a descriptive string using ${app.name}-${service.name}-${env.name}.
  • Release Name - A Release is an instance of a service running in a Kubernetes cluster, and Release Name is a label used to identify a Release. Release Name can be overridden if you want to deploy multiple parallel releases; one scenario is to create a per-Pull Request service instance in the same Kubernetes cluster.
  • Desired Instance Count - The number of instances to schedule.
  • Resize Strategy - Add new instances before downsizing old instances, or vice versa.
  • Steady State Wait Timeout - How long to wait for scheduled instances before timing out.

Can I use Horizontal Pod Autoscaler?

Harness lets you configure the Kubernetes Horizontal Pod Autoscaler to automatically scale pod replicas based on metrics, such as CPU utilization, or a custom metric. For more information, see Horizontal Pod Autoscaler from Kubernetes.

Harness Horizontal Pod Autoscaler:

  • Supports custom YAML.
  • Evaluates expressions.
  • Is versioned and rolls back on failed deployments.
  • Harness overwrites the following related YAML labels and values:
    • metadata.name
    • metadata.namespace
  • The Horizontal Pod Autoscaler is named the same as the controller used by Harness for deployment.

How do I Set Up the Horizontal Pod Autoscaler in Harness?

The Horizontal Pod Autoscaler is available in Harness workflows that are configured with an environment that uses Kubernetes. The process is as follows:

  1. In the Kubernetes Service Setup of a workflow, expand Horizontal Pod Autoscaler and click the Horizontal Pod Autoscaler checkbox.
  2. Define the Min and Max Autoscale Instances and the Target CPU Utilization %. If the default metrics are not adequate, you can use custom metrics.
  3. To use custom metrics, click Use YAML Format and paste in your custom metrics. For information on YAML for custom metrics, see Autoscaling Deployments with Custom Metrics from Google. The article describes how to use StackDriver Metrics Explorer. For information on using external metrics, see Autoscaling Deployments with External Metrics from Google.
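The custom-metrics YAML described in the Google article has roughly the following shape. This is a hedged sketch only: the API version, metric name, and target values depend on your cluster and metrics pipeline, and the scale target is managed by Harness rather than specified here:

```yaml
# Illustrative Horizontal Pod Autoscaler spec with a custom Pods
# metric; all names and values are examples.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa        # metadata.name is overwritten by Harness
spec:
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metricName: packets-per-second   # example custom metric
        targetAverageValue: "100"
```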

How Do I Set Up the Kubernetes Services Types?

Kubernetes service types are different methods of connecting external traffic to your cluster. For more information, see Services from Kubernetes and Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what? from Medium.

Harness supports Kubernetes service types as follows:

  • Supports custom YAML, and evaluates expressions.
  • Not versioned.
  • Does not roll back on failed deployments.
  • Supported Kubernetes service types:
    • Cluster IP
    • Node Port
    • Load Balancer
    • External Name
  • Sends traffic to pods based on the label selectors.
  • You can use custom YAML to expose more than one port or to use annotations.
  • Kubernetes service labels overwritten by Harness:
    • metadata.name
    • metadata.namespace
  • Service is named based on expression ${app.name}-${service.name}-${env.name}.

To set up a Kubernetes service type, do the following:

  1. In the Kubernetes Service Setup of a workflow, expand Service Setup.
  2. Select the service type you want to configure and enter its settings.

Can I add Custom YAML for Kubernetes Service Types?

When the default configuration options in the Kubernetes Service Setup step are not sufficient, you can select YAML in Kubernetes Service Type and enter your YAML for the Kubernetes service.

To find the YAML for a service in GCP, go to Kubernetes Engine, then Services, then click a service name, and then click the YAML tab.

An example of when you might enter Kubernetes service YAML instead of using the options in the Kubernetes Service Setup step is if you chose Load Balancer but want to specify more than one Port (exposed port) and Target Port (the container port mapped to Port). You might want to use both the HTTP (80) and HTTPS (443) ports for these values. In that case, you would select YAML in Kubernetes Service Type and enter your YAML for these ports:

...
spec:
  clusterIP: 10.15.240.143
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 31244
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      nodePort: 31573
      port: 443
      protocol: TCP
      targetPort: 8081
...

Can I use Kubernetes Ingress in Harness?

Harness supports Kubernetes Ingress rules. A Kubernetes Ingress is a collection of rules that allow inbound connections to reach the cluster services. For more information, see Ingress from Kubernetes.

Harness supports Kubernetes Ingress as follows:

  • Supports custom YAML, and evaluates expressions.
  • Not versioned.
  • Does not roll back on failed deployments.
  • Uses placeholders for service name, service port, ConfigMap name, and Secret map name:
    • ${SERVICE_NAME} - required
    • ${SERVICE_PORT} - required
    • ${CONFIG_MAP_NAME}
    • ${SECRET_MAP_NAME}
  • Kubernetes Ingress labels overwritten by Harness:
    • metadata.name
    • metadata.namespace
  • Named the same as the Harness service.

How do I Set Up Kubernetes Ingress Rules?

To set up Ingress Rules for a Kubernetes service, do the following:

  1. In the Kubernetes Service Setup of a workflow, expand Ingress Rules.
  2. Click Enable Ingress rules.
  3. In Ingress YAML, enter the YAML for the Ingress object.
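The Ingress YAML combines ordinary Kubernetes Ingress fields with the required Harness placeholders. The following sketch is illustrative; the host-less rule and path are examples:

```yaml
# Illustrative Ingress YAML using the required Harness placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress    # metadata.name is overwritten by Harness
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: ${SERVICE_NAME}   # required placeholder
              servicePort: ${SERVICE_PORT}   # required placeholder
```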

Can I use Istio Route Rules for Traffic Splitting?

Harness supports Istio as a traffic splitter. When using an Istio ingress controller, traffic flow is decoupled from infrastructure scaling, letting you specify the routing rules to follow rather than which specific pods/VMs should receive traffic. For more information, see Traffic Management and DestinationWeight from Istio.

Harness supports Istio as follows:

  • No custom YAML.
  • Not versioned.
  • Harness handles rollback for traffic splitting.
  • Manages traffic shifting using the revision label on pods: harness-revision.
  • Named the same as the Harness service.

How do I Set Up Istio for Traffic Splitting?

To set up Istio for a Kubernetes service, do the following:

  1. In the Kubernetes Service Setup of a workflow, expand Ingress Rules. You might also want to click Enable Ingress rules and enter custom YAML for Kubernetes Ingress.
  2. In Traffic Splitter, select Istio.
  3. In Gateways, enter the names of the Istio Gateways to be configured with VirtualService. For more information, see Gateways from Istio.
  4. In Hosts, enter the hostnames VirtualService will listen on. If you do not enter a host name, Harness will look up a host based on the service name.

For more information, see Traffic Management and DestinationWeight from Istio.
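The weighted routing that traffic splitting produces can be pictured with an Istio VirtualService like the following. This is an illustrative sketch only; the hosts, subset names, and weights are examples, not what Harness generates verbatim:

```yaml
# Illustrative Istio VirtualService splitting traffic between two
# revisions of a service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-virtualservice
spec:
  hosts:
    - example-service
  http:
    - route:
        - destination:
            host: example-service
            subset: "1"      # previous revision
          weight: 75
        - destination:
            host: example-service
            subset: "2"      # new revision
          weight: 25
```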

What Happens in Kubernetes During a Harness Deployment?

The following procedure describes how a Harness workflow uses Kubernetes components, entities, and variables during a deployment to a Kubernetes infrastructure.

1. Setup Container

During the first step of the workflow, Harness sets up the Kubernetes container and does the following:

  • For versioned Kubernetes controllers, Harness creates the following in Kubernetes, adding a revision number:
    • ConfigMap.
    • Secret map.
    • Controller with zero replicas.
    • Horizontal Pod Autoscaler, disabled initially.
  • For non-versioned Kubernetes controllers, Harness creates or replaces the following, while caching the current version, in case of rollback:
    • ConfigMap.
    • Secret map.
    • Controller.
    • Horizontal Pod Autoscaler.
  • Harness creates or replaces the following entities, deleting them if they existed previously in the cluster but are no longer specified in the Harness workflow, and caching the deleted versions in case of rollback:
    • Image Pull Secret.
    • Service.
    • Ingress Rule.
    • Istio Route Rule.

2. Deploy Containers

During the second step of the workflow, Harness deploys the Kubernetes container(s), changing the number of replicas, and does the following:

  • Harness sets controller replicas to the count or percentage of total specified in the Upgrade Containers dialog.
  • Harness reduces previous controller revision(s) by the same amount.
  • The Advanced section in the Upgrade Containers dialog allows upsize, downsize, and traffic shifting to be set independently.
  • If Istio traffic shifting was enabled in the Kubernetes Setup Step, Harness adjusts Istio Route Rule to split traffic in the same proportion as the number of pods for each revision (default), or in the amount specified in the Advanced section in the Upgrade Containers dialog.
  • If Desired Instances is set to deploy 100%, Harness creates Horizontal Pod Autoscaler.

Rollback Steps: 1. Deploy Containers

If Harness needs to roll back and restore the Kubernetes setup to its previous working version, the first step is to roll back the Kubernetes containers. Harness rolls back the Kubernetes containers and does the following:

  • Deletes Horizontal Pod Autoscaler.
  • Sets the previous controller revision number(s) back to the values they had before the deploy step.
  • Returns the number of controller replicas to the count that existed before the Upgrade Containers step.
  • If Istio traffic shifting is enabled, Harness adjusts Istio Route Rule to split traffic in the same proportion as it was before the Upgrade Containers step.

Rollback Steps: 2. Setup Container

Once Harness has rolled back the Kubernetes containers, it rolls back the remaining Kubernetes infrastructure setup:

  • Reverts DaemonSet or StatefulSet to previous YAML along with ConfigMap, Secret map, and Horizontal Pod Autoscaler, if specified.
  • Enables previous Horizontal Pod Autoscaler.

Why is Deploy Containers Missing From My Workflow?

If you change the Controller Type in the Harness service Container Specification to a type other than Deployment (default), ReplicaSet, or ReplicationController, and then use that service in a Workflow, the Deploy Containers step (and Upgrade Container substep) will not be added to the workflow.

The controller types that will not use the Deploy Containers workflow step are StatefulSet and DaemonSet.

This is particularly important if you use a workflow as a template and try to use services with different Kubernetes controller types with the workflow template.

For information on the controller types Harness supports, see What are the Supported Kubernetes Controllers?.

For example, the default controller type in a Harness service Kubernetes Container Specification is kind: "Deployment". If a Container Specification uses kind: "StatefulSet" instead, and you create a workflow that uses that Harness service, the Deploy Containers step does not appear. Only the Kubernetes Service Setup step is used.

Where can I see the Kubernetes Deployment in GCP?

Once Harness has deployed your application, you can see the Kubernetes entities deployed in the GCP console Kubernetes Engine.

Here is where you can find the Kubernetes entities Harness deployed:

  • Clusters - Clusters and resource usage.
  • Workloads - Controllers, pod details, logs, and events.
  • Services - Services and Ingress endpoints.
  • Configuration - Config Maps and Secrets.
  • Storage - Persistent volume claims.

To find your deployed Kubernetes entity in any section, click in the search field and start typing the name of the entity.

Can I see the Kubernetes Deployment using kubectl?

To view entities that aren’t displayed in GCP (such as a Horizontal Pod Autoscaler), or for other cloud providers, use the kubectl command line interface. For more information, see Overview of kubectl and kubectl Cheat Sheet from Kubernetes.

For example, to see a specific HorizontalPodAutoscaler object in your cluster, run:

kubectl get hpa [HPA_NAME]

To see the HorizontalPodAutoscaler configuration:

kubectl get hpa [HPA_NAME] -o yaml

To see a detailed description of a specific HorizontalPodAutoscaler object in the cluster:

kubectl describe hpa [HPA_NAME]

