OpenShift Connected On-Prem Setup

In addition to the Harness SaaS offering, you can run the Harness Continuous Delivery-as-a-Service platform on-premises, in an OpenShift cluster.

Architecture

The main components of the Harness Connected On-Prem architecture on OpenShift are:

  • OpenShift cluster hosting the microservices used by Harness, such as the Harness Manager, MongoDB, Verification, and Machine Learning Engine.
  • Docker repository used to host the Harness microservice images.
  • Harness Ambassador hosted on a Virtual Machine (VM) that makes a secure outbound connection to the Harness Cloud to download binaries.

Prerequisites

This section lists the infrastructure requirements that must be in place before following the steps in Installation.

System Requirements

  • OpenShift cluster (v3.11) with a dedicated project: harness
  • 28 cores, 44 GB RAM, 640 GB Disk

Micro-service             Pods    CPU     Memory (GB)
Manager                   2       4       10
Verification              2       2       6
Machine Learning Engine   1       8       2
UI                        2       0.5     0.5
MongoDB                   3       12      24
Proxy                     1       0.5     0.5
Ingress                   2       0.5     0.5
Total                     14      27.5    43.5

  • Storage volume (200 GB) for each Mongo pod. (600 GB total)
  • Storage volume (10 GB) for the Proxy pod. 
  • Docker Repository (for Harness microservice images). This repository could be either:
    • Integrated OpenShift Container Registry (OCR)
    • Internal Hosted Repository
  • 1 core, 6 GB RAM, 20 GB disk.
  • Root access on localhost is required only for Docker. If root access is not possible, please follow the instructions on how to run Docker as a non-root user: https://docs.docker.com/install/linux/linux-postinstall/.
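
If you need to run Docker without root access, the Docker post-installation guide linked above boils down to the following steps (a minimal sketch; the docker group may already exist on your VM):

sudo groupadd docker           # create the docker group if it does not already exist
sudo usermod -aG docker $USER  # add the current user to the docker group
newgrp docker                  # pick up the new group membership (or log out and back in)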

Connectivity and Access Requirements

  • Load balancer to OpenShift cluster.
  • OpenShift cluster to Docker repository to pull images.
  • Ambassador to OpenShift cluster to run kubectl and oc commands.
  • Ambassador to Docker repository to push Harness images.
  • Ambassador to load balancer.
  • Ambassador to app.harness.io (port: 443).
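
You can spot-check these paths from the Ambassador VM with nc. The hostnames and ports below are placeholders (except app.harness.io); substitute the values for your environment:

nc -vz <docker_repo_host> <docker_repo_port>       # Ambassador to Docker repository
nc -vz <openshift_api_host> <openshift_api_port>   # Ambassador to OpenShift API (commonly 8443 on v3.11)
nc -vz <load_balancer_host> 443                    # Ambassador to load balancer
nc -vz app.harness.io 443                          # Ambassador to Harness Cloud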

OpenShift Cluster Setup

The following resources must be created in the OpenShift cluster:

  • Project - Must be named harness. You can use the following command:
 oc new-project harness
  • Service account - The Kubernetes Service account must have all permissions within the harness project.
  • Role - A role that provides access on the required API groups (apiGroups).
  • RoleBinding - The RoleBinding between the service account and the role that provides access on the required API groups.
  • StorageClass - The Harness installer will leverage the StorageClass configured in the cluster.
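
Before proceeding, you can confirm that a StorageClass is configured for the installer to use (the class name depends on your cluster):

oc get storageclass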

YAML for OpenShift

The following YAML code can be executed by the cluster admin to create the necessary resources using kubectl apply or oc apply commands.

oc apply -n harness -f <FILENAME>.yaml

Here are the contents of <FILENAME>.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: harness-namespace-admin
  namespace: harness
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: harness-namespace-admin-full-access
  namespace: harness
rules:
- apiGroups: ["", "extensions", "apps", "autoscaling", "rbac.authorization.k8s.io", "roles.rbac.authorization.k8s.io", "route.openshift.io"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: harness-namespace-admin-view
  namespace: harness
subjects:
- kind: ServiceAccount
  name: harness-namespace-admin
  namespace: harness
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: harness-namespace-admin-full-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: harness-clusterrole
  labels:
    app.kubernetes.io/name: harness
    app.kubernetes.io/part-of: harness
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: harness-serviceaccount
  namespace: harness
  labels:
    app.kubernetes.io/name: harness
    app.kubernetes.io/part-of: harness
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: harness-clusterrole-hsa-binding
  labels:
    app.kubernetes.io/name: harness
    app.kubernetes.io/part-of: harness
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: harness-clusterrole
subjects:
- kind: ServiceAccount
  name: harness-serviceaccount
  namespace: harness
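
After applying the YAML, you can confirm the resources were created with a quick check such as the following (resource names as defined above):

oc get serviceaccount,role,rolebinding -n harness
oc get clusterrole harness-clusterrole
oc get clusterrolebinding harness-clusterrole-hsa-binding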

Security Context Constraints (SCCs) 

Allow the service accounts (created above) in the harness project to run Docker images as any user:

oc adm policy add-scc-to-user anyuid -z harness-serviceaccount -n harness
oc adm policy add-scc-to-user anyuid -z default -n harness
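
On OpenShift 3.11, add-scc-to-user records the service accounts on the SCC itself, so a rough sanity check (not an official procedure) is:

oc get scc anyuid -o yaml | grep harness
# Expect entries such as system:serviceaccount:harness:harness-serviceaccount
# and system:serviceaccount:harness:default in the users list.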

 Kube Config File

Now that the project and service account are created, we need a kubeconfig file for the Ambassador to use that has the permissions of the harness-namespace-admin service account. You can use your own utility to create the kubeconfig file, or use the script below.

In the following script, generate_kubeconfig.sh, you need to add the secret name of the harness-namespace-admin service account in the SECRET_NAME variable. To get the secret name associated with the harness-namespace-admin service account, run this command:

oc get sa harness-namespace-admin -n harness -o json | jq -r '.secrets'

From the output of this command, pick the secret that is not the dockercfg secret (that is, the token secret).
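
One way to list the candidate secret names and filter out the dockercfg secret (assuming jq is available, as in the command above):

oc get sa harness-namespace-admin -n harness -o json | jq -r '.secrets[].name' | grep -v dockercfg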

  1. Copy and paste the following into a file called generate_kubeconfig.sh. 
set -e
set -o pipefail

SERVICE_ACCOUNT_NAME=harness-namespace-admin
NAMESPACE=harness
SECRET_NAME=<Put here the name of the secret associated with SERVICE_ACCOUNT_NAME>
KUBECFG_FILE_NAME="./kube/k8s-${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-conf"
TARGET_FOLDER="./kube/"

create_target_folder() {
    echo -n "Creating target directory to hold files in ${TARGET_FOLDER}..."
    mkdir -p "${TARGET_FOLDER}"
    printf "done"
}

extract_ca_crt_from_secret() {
    echo -e -n "\\nExtracting ca.crt from secret..."
    kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o json | jq \
        -r '.data["ca.crt"]' | base64 -d > "${TARGET_FOLDER}/ca.crt"
    printf "done"
}

get_user_token_from_secret() {
    echo -e -n "\\nGetting user token from secret..."
    USER_TOKEN=$(kubectl get secret "${SECRET_NAME}" \
        --namespace "${NAMESPACE}" -o json | jq -r '.data["token"]' | base64 -d)
    printf "done"
}

set_kube_config_values() {
    context=$(kubectl config current-context)
    echo -e "\\nSetting current context to: $context"

    CLUSTER_NAME=$(kubectl config get-contexts "$context" | awk '{print $3}' | tail -n 1)
    echo "Cluster name: ${CLUSTER_NAME}"

    ENDPOINT=$(kubectl config view \
        -o jsonpath="{.clusters[?(@.name == \"${CLUSTER_NAME}\")].cluster.server}")
    echo "Endpoint: ${ENDPOINT}"

    # Set up the config
    echo -e "\\nPreparing k8s-${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-conf"
    echo -n "Setting a cluster entry in kubeconfig..."
    kubectl config set-cluster "${CLUSTER_NAME}" \
        --kubeconfig="${KUBECFG_FILE_NAME}" \
        --server="${ENDPOINT}" \
        --certificate-authority="${TARGET_FOLDER}/ca.crt" \
        --embed-certs=true

    echo -n "Setting token credentials entry in kubeconfig..."
    kubectl config set-credentials \
        "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" \
        --kubeconfig="${KUBECFG_FILE_NAME}" \
        --token="${USER_TOKEN}"

    echo -n "Setting a context entry in kubeconfig..."
    kubectl config set-context \
        "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" \
        --kubeconfig="${KUBECFG_FILE_NAME}" \
        --cluster="${CLUSTER_NAME}" \
        --user="${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" \
        --namespace="${NAMESPACE}"

    echo -n "Setting the current-context in the kubeconfig file..."
    kubectl config use-context "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" \
        --kubeconfig="${KUBECFG_FILE_NAME}"
}

create_target_folder
extract_ca_crt_from_secret
get_user_token_from_secret
set_kube_config_values

echo -e "\\nAll done! Testing with:"
echo "KUBECONFIG=${KUBECFG_FILE_NAME} kubectl get pods -n ${NAMESPACE}"
KUBECONFIG=${KUBECFG_FILE_NAME} kubectl get pods -n ${NAMESPACE}

When you execute this script, it creates a folder called kube that contains a file named k8s-harness-namespace-admin-harness-conf. This file must be copied to the Ambassador VM as $HOME/.kube/config (see below).
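
For example, assuming SSH access to the Ambassador VM (user and hostname are placeholders), the file can be copied with:

ssh <user>@<ambassador_vm> 'mkdir -p ~/.kube'
scp ./kube/k8s-harness-namespace-admin-harness-conf <user>@<ambassador_vm>:~/.kube/config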

Docker Repository Setup

A Docker repository must be created on a repository server to host the Docker images of the Harness microservices. The integrated OpenShift Container Registry (OCR) can also be used for this purpose.

The internal URL of the Docker repository must be sent to Harness as instructed in the Installation section.
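
If you use the integrated OCR on OpenShift 3.11, the internal registry address is typically the docker-registry service in the default project (for example, docker-registry.default.svc:5000); if a route has been exposed for it, you can look up its hostname with:

oc get route docker-registry -n default -o jsonpath='{.spec.host}'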

Ambassador Virtual Machine Setup

The Harness Ambassador is the only component that requires connectivity to app.harness.io:443, and that connection is outbound-only. The Harness Cloud uses the Ambassador to manage the installation of Harness On-Prem. The Ambassador has the following connectivity and access requirements.

Connectivity to the Harness Cloud

Connectivity to the Harness Cloud via hostname app.harness.io and port 443. The following command will check connectivity to the Harness Cloud:

nc -vz app.harness.io 443

Docker login

The Docker login command should be run with credentials that can access the repository created for the Harness microservices. This command allows the Ambassador to push Docker images to the local repository during installation.

Integrated OpenShift Container Registry (OCR):

docker login -p <token> -u unused <ocr_url>/harness

The token used in the above login command should meet the following requirements:

  1. The token should be for a regular user and not a service account.
  2. The user used to log in to OCR should have the following roles:
    1. registry-viewer
    2. registry-editor
    3. system:image-builder

The following command can be used to assign the above roles:

 oc policy add-role-to-user -n harness <role_name> <user> 
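
As an illustration, for a hypothetical user named registry-user, the role assignments and the subsequent docker login (reusing that user's session token via oc whoami -t) could look like this:

oc policy add-role-to-user -n harness registry-viewer registry-user
oc policy add-role-to-user -n harness registry-editor registry-user
oc policy add-role-to-user -n harness system:image-builder registry-user

# Log in to the cluster as that user, then use the session token for docker login:
oc login -u registry-user
docker login -p "$(oc whoami -t)" -u unused <ocr_url>/harness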

Other Docker Repository:

docker login <repo_url>

kubectl Command

The kubectl command-line tool should be preinstalled and preconfigured to connect to the harness project in the given OpenShift cluster.

By default, the kubectl command uses the config hosted at $HOME/.kube/config location. Ensure this file is created according to the instructions in the Kube Config File section. The following commands can be run on the VM to ensure connectivity and access to the given namespace of the cluster.

kubectl get deploy -n harness

kubectl get pods -n harness

kubectl get hpa -n harness

kubectl get configmap -n harness

kubectl get secret -n harness

Secret

Create a secret in the harness project to access the internal Docker repository. This secret can be referenced by the Harness microservices installed later in the Installation section.

Integrated OpenShift Container Registry (OCR):

oc create secret -n harness docker-registry regcred --docker-server=<repo_url> --docker-username=unused --docker-password=<password>

The password used in creating the regcred secret should meet the following requirements:

  1. The password can be that of a regular user or of a service account.
  2. The user or service account used to log into OCR should have the following roles:
    1. registry-viewer
    2. registry-editor
    3. system:image-builder

As a recommendation, the registry credentials of the service account harness-namespace-admin can be used in the above command. To get these credentials, run this command:

oc get secret -n harness $(oc get sa harness-namespace-admin -n harness -o json | jq -r '.imagePullSecrets[0].name') -o json | jq -r '.data.".dockercfg"' | base64 -d | jq -r '[.[]][0].password'

Copy the password from the output of the above command and use it to create the secret.

Other Docker Repository:

oc create secret -n harness docker-registry regcred --docker-server=<repo_url> --docker-username=<user> --docker-password=<password>

 Load Balancer Setup

The load balancer must be preconfigured to connect to the Ingress controller in the harness namespace. The load balancer's internal URL must also be provided to Harness, as instructed in the Installation section.
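
The exact service names depend on the installation, but you can list the services and endpoints in the harness namespace to identify what the load balancer should forward traffic to:

oc get svc -n harness
oc get endpoints -n harness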

Installation

Once the infrastructure is prepared (per the Prerequisites listed above), Harness installation in your OpenShift cluster involves the following steps:

  1. Email the following details to the Harness Team:
    1. Company Name.
    2. Account Name (this can be the same as the company name).
    3. Primary Admin Email Address.
    4. Repository URL (internal URL, used by the Ambassador and OpenShift cluster).
    5. Load Balancer URL (internal URL, used by the Ambassador and OpenShift cluster).
    6. Storage Class configured in the cluster.
  2. Harness provides a script to run on the Ambassador. This script includes a signed URL that grants access to download the binaries and start the Ambassador process. Once the process completes, notify the Harness Team that this step succeeded.
  3. After validating the connectivity of the Ambassador to the Cloud, the Harness Team triggers the installation of the platform on the OpenShift cluster, as configured.

Verification

To verify the installation, open the Load Balancer URL in your browser. The Harness On-Prem application should open successfully.
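
As an additional sanity check from the Ambassador VM (the URL below is a placeholder for your load balancer URL):

kubectl get pods -n harness           # all pods should reach the Running state
curl -k https://<load_balancer_url>   # should return the Harness UI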

Backing Up Data

Harness customers are responsible for setting up MongoDB data backup. See Connected On-Prem Backup and Restore Strategy.
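
As a rough sketch only (pod names, credentials, and paths vary by installation; see the linked strategy document for the supported procedure), a manual dump could be taken with mongodump from inside a MongoDB pod:

oc exec -n harness <mongodb_pod> -- mongodump \
  --username <user> --password <password> --authenticationDatabase admin \
  --out /tmp/mongo-backup
oc cp harness/<mongodb_pod>:/tmp/mongo-backup ./mongo-backup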

The available backup options are also documented by MongoDB.

