1 - Delegate, Providers, and Helm Setup
This content is for Harness FirstGen. This topic describes how to set up the Harness Delegate, Connectors, and Cloud Providers for Helm, and provides some basic Helm setup information.
Harness supports both Kubernetes and Helm deployments, and you can use Helm charts in both. Harness Kubernetes Deployments allow you to use your own Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller needing to be installed in the target cluster. See Helm Charts.
Permissions for Connections and Providers
You connect Docker registries and Kubernetes clusters to Harness using the accounts you have with those providers. The following list covers the permissions required for the Docker, Kubernetes, and Helm components.
- Docker:
- Read permissions for the Docker repository - The Docker registry you use as an Artifact Server in Harness must have Read permissions for the Docker repository.
- List images and tags, and pull images - The user account you use to connect the Docker registry must be able to perform the following operations in the registry: List images and tags, and pull images. If you have a Docker Hub account, you can access the NGINX Docker image we use in this guide.
- Kubernetes Cluster:
- For the Kubernetes Cluster or other Cloud Providers, please see Kubernetes Cluster and the Harness Add Cloud Providers doc.
- For a cluster or provider such as OpenShift, please see Kubernetes Cluster.
- Helm:
- URL for the Helm chart - For this guide, we use a publicly available Helm chart for NGINX from Bitnami, hosted on their GitHub account. You do not need a GitHub account.
- Helm and Tiller - Helm and Tiller must be installed and running in your Kubernetes cluster. Steps for setting this up are listed below.
For a list of all of the permissions and network requirements for connecting Harness to providers, see Delegate Connection Requirements.
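To make the Helm and Tiller requirement above concrete, a common way to install Tiller in the cluster with a dedicated service account looks like the following. This is a minimal sketch for Helm 2; binding Tiller to cluster-admin is broad, and production setups usually scope the role more narrowly.

```shell
# Create a service account for Tiller in the kube-system namespace
kubectl -n kube-system create serviceaccount tiller

# Bind the service account to the cluster-admin role (broad; narrow this in production)
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Install Tiller into the cluster using that service account (Helm 2)
helm init --service-account tiller

# Verify that both the Helm client and the Tiller server respond
helm version
```

If `helm version` reports both a Client and a Server version, Tiller is running and reachable.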
Harness Kubernetes Delegate
The Harness Kubernetes Delegate runs in your target deployment cluster and executes all deployment steps, such as artifact collection and kubectl commands. The Delegate makes outbound connections to the Harness Manager only.
You can install and run the Harness Kubernetes Delegate in any Kubernetes environment, but the permissions needed for connecting Harness to that environment will be different for each environment.
The simplest method is to install the Harness Delegate in your Kubernetes cluster and then set up the Harness Cloud Provider to use the same credentials as the Delegate. For information on how to install the Delegate in a Kubernetes cluster, see Kubernetes Cluster. For an example installation of the Delegate in a Kubernetes cluster in a Cloud Platform, see Installation Example: Google Cloud Platform.
Here is a quick summary of the steps for installing the Harness Delegate in your Kubernetes cluster:
1. In Harness, click Setup, and then click Harness Delegates.
2. Click Download Delegate, and then click Kubernetes YAML.
3. In the Delegate Setup dialog, enter a name for the Delegate, such as doc-example, select a Profile (the default is named Primary), and click Download. The YAML file is downloaded to your machine.
4. Install the Delegate in your cluster. You can copy the YAML file to your cluster any way you choose, but the following steps describe a common method.
   1. In a Terminal, connect to the Kubernetes cluster, and then use the same terminal to navigate to the folder where you downloaded the Harness Delegate YAML file. For example, cd ~/Downloads.
   2. Extract the YAML file: tar -zxvf harness-delegate-kubernetes.tar.gz
   3. Navigate to the harness-delegate-kubernetes folder that was created: cd harness-delegate-kubernetes
   4. Paste the following installation command into the Terminal and press Enter: kubectl apply -f harness-delegate.yaml
You will see the following output (this Delegate is named doc-example):
namespace/harness-delegate created
clusterrolebinding.rbac.authorization.k8s.io/harness-delegate-cluster-admin created
statefulset.apps/doc-example-lnfzrf created
Run this command to verify that the Delegate pod was created:
kubectl get pods -n harness-delegate
You will see output with the status Pending. The Pending status simply means that the cluster is still loading the pod.
Wait a few moments for the cluster to finish loading the pod and for the Delegate to connect to Harness Manager.
In Harness Manager, in the Harness Delegates page, the new Delegate will appear. You can refresh the page if you like.
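If the pod stays in Pending longer than expected, or the Delegate never appears in Harness Manager, the following commands can help you watch the rollout and inspect startup logs. This is a sketch; doc-example-lnfzrf-0 is the pod name implied by the sample StatefulSet output above, so substitute the name from your own kubectl get pods output.

```shell
# Watch the Delegate pod until it reaches the Running state (Ctrl+C to stop)
kubectl get pods -n harness-delegate -w

# Inspect scheduling events if the pod remains in Pending
kubectl describe pod doc-example-lnfzrf-0 -n harness-delegate

# Tail the Delegate logs to confirm it connects to the Harness Manager
kubectl logs -f doc-example-lnfzrf-0 -n harness-delegate
```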
Connections and Providers Setup
This section describes how to set up Docker and Kubernetes with Harness, and what the requirements are for using Helm.
Docker Artifact Server
You can add a Docker repository, such as Docker Hub, as an Artifact Server in Harness. Then, when you create a Harness service, you specify the Artifact Server and artifact(s) to use for deployment.
For this guide, we will be using a publicly available Docker image of NGINX, hosted on Docker Hub at hub.docker.com/_/nginx/. You will need to set up or use an existing Docker Hub account to use Docker Hub as a Harness Artifact Server. To set up a free account with Docker Hub, see Docker Hub.
To specify a Docker repository as an Artifact Server, do the following:
- In Harness, click Setup.
- Click Connectors. The Connectors page appears.
- Click Artifact Servers, and then click Add Artifact Server. The Artifact Servers dialog appears.
- In Type, click Docker Registry. The dialog changes for the Docker Registry account.
- In Docker Registry URL, enter the URL for the Docker Registry (for Docker Hub, https://registry.hub.docker.com/v2/).
- Enter a username and password for the provider (for example, your Docker Hub account).
- Click SUBMIT. The artifact server is displayed.
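To sanity-check the registry URL outside Harness, you can query the Docker Hub V2 API directly. The following sketch uses the public library/nginx repository, which allows anonymous pulls, so no account is required; the sed expression is a quick-and-dirty way to pull the token out of the JSON response.

```shell
# Request an anonymous pull token for the public library/nginx repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/nginx:pull" \
  | sed -E 's/.*"token":"([^"]+)".*/\1/')

# List available tags; a JSON tag list confirms the registry URL and image are reachable
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://registry.hub.docker.com/v2/library/nginx/tags/list" | head -c 300
```

A response beginning with {"name":"library/nginx","tags":[... indicates the registry URL and credentials flow work.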
Single GCR Docker Registry across Multiple Projects
In this document, we perform a simple set up using Docker Registry. Another common artifact server for Kubernetes deployments is GCR (Google Container Registry), also supported by Harness.
An important note about using GCR is that if your GCR and target GKE Kubernetes cluster are in different GCP projects, Kubernetes might not have permission to pull images from the GCR project. For information on using a single GCR Docker registry across projects, see Using single Docker repository with multiple GKE projects from Medium and the Granting users and other projects access to a registry section from Configuring access control by Google.
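As one possible fix for the cross-project case above: legacy GCR stores images in a Cloud Storage bucket in the registry project, so granting the GKE nodes' service account read access to that bucket allows pulls. In this sketch, PROJECT_NUMBER and GCR_PROJECT are placeholders for your GKE project number and your GCR project ID.

```shell
# Grant the GKE nodes' default compute service account read access to the
# GCR storage bucket in the registry project (placeholders in caps).
gsutil iam ch \
  serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:objectViewer \
  gs://artifacts.GCR_PROJECT.appspot.com
```

See the linked Google documentation for the bucket naming used by regional registries (for example, eu.artifacts.GCR_PROJECT.appspot.com).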
Kubernetes Cluster
For a Cloud Provider in Harness, you can specify a Kubernetes-supporting Cloud platform, such as Google Cloud Platform and OpenShift, or your own Kubernetes Cluster, and then define the deployment environment for Harness to use.
For this guide, we will use the Kubernetes Cluster Cloud Provider. If you use GCP, see Creating a Cluster from Google.
The specs for the Kubernetes cluster you create will depend on the microservices or apps you will deploy to it. As guidance on node specs, this guide uses a node pool created for a Kubernetes cluster in GCP.
For Harness deployments, a Kubernetes cluster requires the following:
- Credentials for the Kubernetes cluster in order to add it as a Cloud Provider. If you set up GCP as a cloud provider using a GCP user account, that account should also be able to configure the Kubernetes cluster on the cloud provider.
- The kubectl command-line tool must be configured to communicate with your cluster.
- A kubeconfig file for the cluster. The kubeconfig file configures access to a cluster. It does not need to be named kubeconfig.
For more information, see Accessing Clusters and Configure Access to Multiple Clusters from Kubernetes.
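As a quick sketch of what "configured to communicate with your cluster" means in practice, assuming kubectl is installed and you have a kubeconfig file for the cluster (the file path below is a hypothetical example):

```shell
# Point kubectl at a specific kubeconfig file (it does not need to be named kubeconfig)
export KUBECONFIG=$HOME/.kube/my-cluster-config

# Show the cluster, user, and context kubectl will use
kubectl config view --minify

# Confirm connectivity to the cluster's API server
kubectl cluster-info
```

If `kubectl cluster-info` prints the API server address without an error, the credentials in the kubeconfig are sufficient for Harness to be given the same access.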