Kubernetes Connected On-Prem Setup

In addition to the Harness SaaS offering, you can run the Harness Continuous Delivery-as-a-Service platform on-premise, in a local Kubernetes cluster.

For more information, see the Harness Blog article, Harness uses Kubernetes for On-Premises Deployment.

This topic contains the following sections: Architecture, Prerequisites, Installation, Verification, Backing Up Data, and Next Steps.

Architecture

The main components of the Harness Connected On-Premise architecture with Kubernetes are:

  • Kubernetes cluster hosting the microservices used by Harness, such as the Harness Manager, MongoDB, Verification, and Machine Learning Engine.
  • Docker repo used to host the Harness microservice images pulled by the cluster.
  • Harness Ambassador hosted on a Virtual Machine (VM) that makes a secure outbound connection to the Harness Cloud to download binaries.

Prerequisites

This section lists the infrastructure requirements that must be in place before following the steps in Installation.

System Requirements

  1. Kubernetes cluster (v1.9+) with a dedicated namespace: harness.
  • 28 cores, 44 GB RAM, 640 GB disk, allocated across the Harness microservices as follows (a quick capacity check is sketched after this list):

Microservice               Pods    CPU     Memory (GB)
Manager                    2       4       10
Verification               2       2       6
Machine Learning Engine    1       8       2
UI                         2       0.5     0.5
MongoDB                    3       12      24
Proxy                      1       0.5     0.5
Ingress                    2       0.5     0.5
Total                      14      27.5    43.5

  • Storage volume (200 GB) for each MongoDB pod (600 GB total).
  • Storage volume (10 GB) for the Proxy pod.
  2. Docker repository (for Harness microservice images).
  3. Harness Ambassador, hosted on a Virtual Machine (see Ambassador Virtual Machine Setup below).
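
As a quick sanity check that the cluster meets the version and capacity requirements, you can run something like the following (a sketch only; it assumes kubectl is already configured against the target cluster):

kubectl version
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory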

Connectivity and Access Requirements

  • Load balancer to Kubernetes cluster.
  • Kubernetes cluster to internal repository to pull images.
  • Ambassador to Kubernetes cluster to run kubectl commands.
  • Ambassador to internal repository to push Harness images.
  • Ambassador to load balancer.
  • Ambassador to app.harness.io (port: 443).
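
A simple way to spot-check the Ambassador-side paths is with nc from the Ambassador VM. The repository and load balancer hostnames below are placeholders for your internal addresses, and the ports may differ in your environment:

nc -vz <repository-host> 443        # Ambassador to internal repository
nc -vz <load-balancer-host> 443     # Ambassador to load balancer
nc -vz app.harness.io 443           # Ambassador to Harness Cloud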

Kubernetes Cluster Setup

The following resources must be created in the Kubernetes cluster:

  • Namespace - Must be named harness.
  • Service account - The Kubernetes service account must have all permissions within the harness namespace.
  • Role - A role that provides access on the required API groups (apiGroups).
  • RoleBinding - The RoleBinding between the service account and the role that provides access on the required API groups.
  • StorageClass - The Harness installer will leverage the StorageClass configured in the cluster.

YAML for Kubernetes

The following YAML code can be executed by the cluster admin to create the necessary resources.

apiVersion: v1
kind: Namespace
metadata:
  name: harness
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: harness-namespace-admin
  namespace: harness
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: harness-namespace-admin-full-access
  namespace: harness
rules:
- apiGroups: ["", "extensions", "apps", "autoscaling", "rbac.authorization.k8s.io", "roles.rbac.authorization.k8s.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: harness-namespace-admin-view
  namespace: harness
subjects:
- kind: ServiceAccount
  name: harness-namespace-admin
  namespace: harness
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: harness-namespace-admin-full-access
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: harness-clusterrole
  labels:
    app.kubernetes.io/name: harness
    app.kubernetes.io/part-of: harness
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: harness-serviceaccount
  namespace: harness
  labels:
    app.kubernetes.io/name: harness
    app.kubernetes.io/part-of: harness
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: harness-clusterrole-hsa-binding
  labels:
    app.kubernetes.io/name: harness
    app.kubernetes.io/part-of: harness
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: harness-clusterrole
subjects:
- kind: ServiceAccount
  name: harness-serviceaccount
  namespace: harness
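
If you save the manifest above to a file, the cluster admin can apply it with kubectl (the file name harness-setup.yaml below is just an example):

kubectl apply -f harness-setup.yaml
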
Kube Config File

Now that the namespace and service account are created, we need a kubeconfig file for the Ambassador to use that has the permissions of the harness-namespace-admin service account. You can use your own utility to create the kubeconfig file, or use the script below.

  1. Copy and paste the following into a file called generate_kubeconfig.sh:
#!/bin/bash
# Generates a kubeconfig for the harness-namespace-admin service account.
# Requires kubectl (configured against the target cluster) and jq.
# Note: "base64 -D" is the macOS flag; on Linux use "base64 -d" (or --decode).
set -e
set -o pipefail

SERVICE_ACCOUNT_NAME=harness-namespace-admin
NAMESPACE=harness
KUBECFG_FILE_NAME="./kube/k8s-${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-conf"
TARGET_FOLDER="./kube/"

create_target_folder() {
  echo -n "Creating target directory to hold files in ${TARGET_FOLDER}..."
  mkdir -p "${TARGET_FOLDER}"
  printf "done"
}

get_secret_name_from_service_account() {
  echo -e "\\nGetting secret of service account ${SERVICE_ACCOUNT_NAME}-${NAMESPACE}"
  SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.secrets[].name')
  echo "Secret name: ${SECRET_NAME}"
}

extract_ca_crt_from_secret() {
  echo -e -n "\\nExtracting ca.crt from secret..."
  kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o json | jq \
    -r '.data["ca.crt"]' | base64 -D > "${TARGET_FOLDER}/ca.crt"
  printf "done"
}

get_user_token_from_secret() {
  echo -e -n "\\nGetting user token from secret..."
  USER_TOKEN=$(kubectl get secret "${SECRET_NAME}" \
    --namespace "${NAMESPACE}" -o json | jq -r '.data["token"]' | base64 -D)
  printf "done"
}

set_kube_config_values() {
  context=$(kubectl config current-context)
  echo -e "\\nSetting current context to: $context"

  CLUSTER_NAME=$(kubectl config get-contexts "$context" | awk '{print $3}' | tail -n 1)
  echo "Cluster name: ${CLUSTER_NAME}"

  ENDPOINT=$(kubectl config view \
    -o jsonpath="{.clusters[?(@.name == \"${CLUSTER_NAME}\")].cluster.server}")
  echo "Endpoint: ${ENDPOINT}"

  # Set up the config
  echo -e "\\nPreparing k8s-${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-conf"
  echo -n "Setting a cluster entry in kubeconfig..."
  kubectl config set-cluster "${CLUSTER_NAME}" \
    --kubeconfig="${KUBECFG_FILE_NAME}" \
    --server="${ENDPOINT}" \
    --certificate-authority="${TARGET_FOLDER}/ca.crt" \
    --embed-certs=true

  echo -n "Setting token credentials entry in kubeconfig..."
  kubectl config set-credentials \
    "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" \
    --kubeconfig="${KUBECFG_FILE_NAME}" \
    --token="${USER_TOKEN}"

  echo -n "Setting a context entry in kubeconfig..."
  kubectl config set-context \
    "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" \
    --kubeconfig="${KUBECFG_FILE_NAME}" \
    --cluster="${CLUSTER_NAME}" \
    --user="${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" \
    --namespace="${NAMESPACE}"

  echo -n "Setting the current-context in the kubeconfig file..."
  kubectl config use-context "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" \
    --kubeconfig="${KUBECFG_FILE_NAME}"
}

create_target_folder
get_secret_name_from_service_account
extract_ca_crt_from_secret
get_user_token_from_secret
set_kube_config_values

echo -e "\\nAll done! Testing with:"
echo "KUBECONFIG=${KUBECFG_FILE_NAME} kubectl get pods -n ${NAMESPACE}"
KUBECONFIG=${KUBECFG_FILE_NAME} kubectl get pods -n ${NAMESPACE}
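
  2. Make the script executable and run it (one common way; it can also be invoked with bash generate_kubeconfig.sh):
chmod +x generate_kubeconfig.sh
./generate_kubeconfig.sh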

When you execute this script, it creates a folder called kube that contains a file named k8s-harness-namespace-admin-harness-conf.

This file must be copied to the Ambassador VM (see Ambassador Virtual Machine Setup below) as $HOME/.kube/config.
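
For example, assuming you have SSH access to the Ambassador VM (the user and host below are placeholders):

scp ./kube/k8s-harness-namespace-admin-harness-conf <user>@<ambassador-vm>:~/.kube/config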

Docker Repository Setup

A Docker repository must be created on a repository server to host Docker images of Harness microservices. The internal URL of the Docker repository must be sent to Harness as instructed in the Installation section.

Ambassador Virtual Machine Setup

The Harness Ambassador is the only required component that needs connectivity to app.harness.io:443, and that connection is outbound only. The Harness Cloud uses the Ambassador to manage the installation of Harness On-Premise. The Ambassador has the following connectivity and access requirements:

  • Connectivity to the Harness Cloud: Connectivity to the Harness Cloud via hostname app.harness.io and port 443. The following command will check connectivity to the Harness Cloud:
nc -vz app.harness.io 443
  • Docker login: The Docker login command should be run with credentials that can access the repository created for the Harness microservices. This command allows the Ambassador to push Docker images to the local repository during installation. The following command must be run after replacing the repository URL:
docker login repository_url
  • kubectl command: The kubectl command-line tool should be preinstalled and preconfigured to connect to the harness namespace in the given Kubernetes cluster. By default, kubectl uses the configuration at $HOME/.kube/config. Ensure this file is created according to the instructions in the Kube Config File section. The following commands can be run on the VM to verify connectivity and access to the given namespace of the cluster:
kubectl get deploy -n harness
kubectl get pods -n harness
kubectl get hpa -n harness
kubectl get configmap -n harness
kubectl get secret -n harness
kubectl get role -n harness
kubectl get rolebinding -n harness
  • Secret: Create a secret in the harness namespace to access the internal Docker repository. This secret is referenced by the Harness microservices installed later, in Installation. Below is a sample shell script to help create the secret. Create a file named secret.sh, replace the repository URL, user, password, and email placeholders, and then execute the script.
#!/bin/bash
# Please replace placeholders with real values

REGISTRY_URL="<your-repo-url>"
REGISTRY_USER="<user>"
REGISTRY_PASS="<password>"
REGISTRY_EMAIL="<email>"

cat <<EOF | kubectl apply -f -
apiVersion: v1
stringData:
  .dockercfg: '{"$REGISTRY_URL":{"username":"$REGISTRY_USER","password":"$REGISTRY_PASS","email":"$REGISTRY_EMAIL"}}'
kind: Secret
metadata:
  name: regcred
  namespace: harness
type: kubernetes.io/dockercfg
EOF
Ensure that you replace the repository URL, user, password, and email in the previous script.
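
After running the script, you can confirm that the secret exists in the cluster:
kubectl get secret regcred -n harness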

Load Balancer Setup

The load balancer must be preconfigured to connect to the static node port of the Ingress controller in the harness namespace. The load balancer's internal URL must also be provided to Harness as instructed in the Installation section.
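
Once the Harness Ingress controller has been deployed, one way to confirm the node port the load balancer should target is to list the services in the namespace (a sketch; the service name depends on the installation):

kubectl get svc -n harness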

Installation

Once the infrastructure is prepared (per the Prerequisites listed above), Harness installation in your local Kubernetes cluster involves the following steps:

  1. Email the following details to the Harness Team:
    1. Company Name.
    2. Account Name (this can be the same as the company name).
    3. Primary Admin Email Address.
    4. Repository URL (internal URL, used by the Ambassador and Kubernetes cluster).
    5. Load Balancer URL (internal URL, used by the Ambassador and Kubernetes cluster).
  2. Harness provides a script to run on the Ambassador VM. The script includes a signed URL used to download the binaries and start the Ambassador process. Once the process completes, notify the Harness Team that this step succeeded.
  3. After validating the connectivity of the Ambassador to the Cloud, the Harness Team triggers the installation of the platform on the local Kubernetes cluster, as configured.

Verification

To verify the installation, open the Load Balancer URL in your browser. If the installation succeeded, the Harness On-Prem application opens.
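
You can also check reachability from the command line (the URL below is a placeholder for your load balancer URL; -k skips certificate verification if you use a self-signed certificate):

curl -k -s -o /dev/null -w "%{http_code}\n" https://<load-balancer-url>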

Backing Up Data

Harness customers are responsible for setting up MongoDB data backup. There are two backup options documented by MongoDB; see the MongoDB backup documentation for details.
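
As an illustration only (not a Harness-documented procedure), one common approach is a mongodump taken against one of the MongoDB pods; the pod name below is a placeholder and any required authentication flags are omitted:

kubectl exec -n harness <mongodb-pod> -- mongodump --archive > harness-mongo-$(date +%F).archive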

Next Steps

  1. Install the Harness Delegate: Delegate Installation and Management.
  2. Set up an SMTP Collaboration Provider in Harness for email notifications from the Harness Manager: Add Collaboration Providers.
