Kubernetes Cluster On-Prem: Kubernetes Cluster Setup


This topic covers installing Harness Kubernetes Cluster On-Prem in an existing Kubernetes cluster.

We assume that you are very familiar with Kubernetes, can perform standard Kubernetes operations, and can manage configurations using Kustomize overlays.

Harness Kubernetes Cluster On-Prem uses the KOTS kubectl plugin for installation. This topic covers installing KOTS in your existing cluster as part of setting up Harness On-Prem.

Installing Harness On-Prem into an existing Kubernetes cluster is a simple process where you prepare your existing cluster and network, and use the KOTS admin tool and Kustomize to complete the installation and deploy Harness.

Cluster Requirements

Do not perform any of the steps in this topic until you have set up the requirements in the Kubernetes Cluster On-Prem: Infrastructure Requirements topic.

Summary

Installing Harness in an existing cluster is performed as a KOTS Existing Cluster Online Install.

This simply means that you are using an existing Kubernetes cluster, as opposed to bare metal or VMs, and that your cluster can make outbound internet requests for an online installation.

Step 1: Set up Cluster Requirements

As stated earlier, follow the steps in the Kubernetes Cluster On-Prem: Infrastructure Requirements topic to ensure you have your cluster set up correctly.

These requirements also include RBAC settings that might require your IT administrator to assist you unless your user account is bound to the cluster-admin Cluster Role.

Specifically, you need to create a KOTS admin Role and bind it to the user that will install Harness. You also need to create a Harness ClusterRole.
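
For example, if your account is not already bound to cluster-admin, a binding like the following is enough to run the install. This is a minimal sketch: the binding name and user are placeholders, and the exact Role and ClusterRole definitions are covered in the Infrastructure Requirements topic.

kubectl create clusterrolebinding kots-install-admin \
  --clusterrole=cluster-admin \
  --user=<your-user@example.com>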

Step 2: Set Up Networking Requirements

Perform the following steps to ensure that you have the Load Balancer set up for Harness On-Prem.

Later, when you set up the kustomization for Harness On-Prem, you will provide an IP address for the cluster load balancer settings.

Finally, when you configure the Harness On-Prem application, you will provide the Load Balancer URL. This URL is what Harness On-Prem users will use.

Using NodePort?

If you are creating the load balancer using a Service of type NodePort, create a load balancer that points to any port in the range 30000-32767 on the node pool on which the Kubernetes cluster is running.

If you are using NodePort, you can skip to Step 3: Install KOTS.

Set Up Static External IP

You should have a static IP reserved to expose Harness outside of the Kubernetes cluster.

For example, in the GCP console, click VPC network, and then click External IP Addresses.

For more information, see Reserving a static external IP address.

For GCP, the External IP address must be Premium Tier.
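
For example, on GCP you can reserve a Premium Tier regional static address with gcloud. The address name and region below are placeholders:

gcloud compute addresses create harness-onprem-ip \
  --region us-central1 \
  --network-tier PREMIUM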

Set Up DNS

Set up DNS to resolve the domain name you want to use for Harness On-Prem to the static IP address you reserved in the previous step.

For example, the domain name harness.abc.com would resolve to the static IP:

host harness.abc.com
harness.abc.com has address 192.0.2.0

The above DNS setup can be tested by running host <domain_name>.

Review: OpenShift Clusters

If you will be using OpenShift Clusters, run the following commands after installing the KOTS plugin, but before installing Harness:

oc adm policy add-scc-to-user anyuid -z harness-serviceaccount -n harness
oc adm policy add-scc-to-user anyuid -z harness-default -n harness
oc adm policy add-scc-to-user anyuid -z default -n harness

Once you've installed Harness and you want to install a Harness Kubernetes Delegate, see Delegates and OpenShift.

Option 1: Disconnected Installation (Airgap)

The following steps will install KOTS from your private repo and the Harness On-Prem license and airgap file you obtain from Harness.

  1. Download the latest KOTS (kotsadm.tar.gz) release from https://github.com/replicatedhq/kots/releases.
  2. Push KOTS images to your private registry:
    kubectl kots admin-console push-images ./kotsadm.tar.gz <private.registry.host>/harness \
    --registry-username <rw-username> \
    --registry-password <rw-password>
  3. Obtain the Harness license file from your Harness Customer Success contact or email support@harness.io.
  4. Obtain the Harness airgap file from Harness.
  5. Log into your cluster.
  6. Install KOTS and Harness using the following command:
kubectl kots install harness \
--namespace harness \
--shared-password <password> \
--license-file <path to license.yaml> \
--config-values <path to configvalues.yaml> \
--airgap-bundle <path to harness-<version>.airgap> \
--kotsadm-registry <private.registry.url> \
--registry-username <rw-username> \
--registry-password <rw-password>

Notes:

  • The --namespace parameter uses the namespace you created in Kubernetes Cluster On-Prem: Infrastructure Requirements. In this documentation, we use the namespace harness.
  • For the --shared-password parameter, enter a password for the KOTS admin console. You will use this password to log into the KOTS admin tool.
  • The --config-values parameter is only needed if you use a config values file, as described in Config Values from KOTS (see the sketch after these notes).
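
A config values file is a standard KOTS ConfigValues manifest. The sketch below shows only the general shape; the item names under spec.values are hypothetical placeholders, and the actual keys for Harness are listed in Config Values from KOTS:

apiVersion: kots.io/v1beta1
kind: ConfigValues
metadata:
  name: harness
spec:
  values:
    # Hypothetical item names for illustration only; use the keys documented for Harness.
    loadbalancer_ip:
      value: "192.0.2.0"
    application_url:
      value: "https://harness.abc.com"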

In the terminal, it will look like this:

• Deploying Admin Console
• Creating namespace ✓
• Waiting for datastore to be ready ✓

The KOTS admin tool URL is provided:

• Waiting for Admin Console to be ready ✓

• Press Ctrl+C to exit
• Go to http://localhost:8800 to access the Admin Console

Use the URL provided in the output to open the KOTS admin console in a browser.

Enter the password you provided earlier, and click Log In.

You might be prompted to allow a port-forward connection into the cluster.

Now that KOTS and Harness are installed, you can perform the necessary configurations.

Option 2: Connected Installation

The following steps will install KOTS and Harness On-Prem online. There is also an option to use a Harness On-Prem airgap file instead of downloading Harness On-Prem.

Install KOTS Plugin

  1. Log into your cluster.
  2. Install the KOTS kubectl plugin using the following command:

curl https://kots.io/install | bash

The output of the command is similar to this:

Installing replicatedhq/kots v1.16.1
(https://github.com/replicatedhq/kots/releases/download/v1.16.1/kots_darwin_amd64.tar.gz)...
############################################# 100.0%
Installed at /usr/local/bin/kubectl-kots

To test the installation, run this command:

kubectl kots --help

The KOTS help appears.

Now that KOTS is installed, you can install Harness On-Prem into your cluster.

Install KOTS

To install the KOTS Admin tool, enter the following command:

kubectl kots install harness

You are prompted to enter the namespace for the Harness installation. This is the namespace you created in Kubernetes Cluster On-Prem: Infrastructure Requirements.

In this documentation, we use the namespace harness.

In the terminal, it will look like this:

Enter the namespace to deploy to: harness
• Deploying Admin Console
• Creating namespace ✓
• Waiting for datastore to be ready ✓

Enter a password for the KOTS admin console and hit Enter. You will use this password to log into the KOTS admin tool.

The KOTS admin tool URL is provided:

Enter a new password to be used for the Admin Console: ••••••••
• Waiting for Admin Console to be ready ✓

• Press Ctrl+C to exit
• Go to http://localhost:8800 to access the Admin Console

Use the URL provided in the output to open the KOTS admin console in a browser.

Enter the password you provided earlier, and click Log In.

You might be prompted to allow a port-forward connection into the cluster.

Upload Your Harness License

Once you are logged into the KOTS admin console, you can upload your Harness license.

Obtain the Harness license file from your Harness Customer Success contact or email support@harness.io.

Drag your license YAML file into the KOTS admin tool, and then click Upload license.

Now that the license file is uploaded, you can install Harness.

Download Harness over the Internet

If you are installing Harness over the Internet, click the download Harness from the Internet link.

KOTS begins installing Harness into your cluster.

Next, you will configure Harness.

Step 3: Configure Harness

Now that you have added your license, you can configure the networking for the Harness installation.

If the KOTS admin tool is not running, point kubectl to the cluster where Harness is deployed and run the following command:

kubectl kots admin-console --namespace harness

In the KOTS admin tool, the Configure Harness settings appear.

Kubernetes Cluster On-Prem requires that you provide a NodePort and Application URL.

Mode

  • Select Demo to run On-Prem in demo mode and experiment with it.
  • Select Production HA to run a production version of On-Prem.

Ingress Service Type

By default, nginx is used for Ingress. If you deploy nginx separately, do the following:

  1. Click Advanced Configurations.
  2. Disable the Install Nginx Ingress Controller option.

External Traffic Policy

You can select the external traffic policy for the Ingress controller.

Select one of the following:

  • Cluster
  • Local

For information on how these options work, see Preserving the client source IP from Kubernetes.

NodePort

Enter any port in the range 30000-32767 on the node pool on which the Kubernetes cluster is running.

If you do not enter a port, Harness uses 32500 by default.
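
If you want to confirm that the port you plan to use is not already allocated to another Service in the cluster, a quick check like the following (using 32500 as an example) will show any matches:

kubectl get svc --all-namespaces | grep 32500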

External Loadbalancer

Enter the IP address of the Load Balancer.

Application URL

Enter the URL users will enter to access Harness. This is the DNS domain name mapped to the Load Balancer IP.

When you are done, click Continue.

Host Name

The hostname is the DNS name or IP address of the Load Balancer.

Storage Class

You can also add a Storage Class. The name of the Storage Class depends on the provider where you are hosting your Kubernetes cluster. See Storage Classes from Kubernetes.

If you don't provide a name, Harness will use default.

Once you have On-Prem installed, you can just run the following command to get the Storage Classes in the namespace (in this example, harness):

kubectl get storageclass -n harness

Enter the name of the Storage Class.

Option: Advanced Configurations

In the Advanced Configurations section, there are a number of advanced settings you can configure. If this is the first time you are setting up Harness On-Prem, there is no need to fine-tune the installation with these settings.

You can change the settings later in the KOTS admin console's Config tab:

Step 4: Perform Preflight Checks

Preflight checks run automatically and verify that your setup meets the minimum requirements.

You can skip these checks, but we recommend you let them run.

Fix any issues in the preflight steps. A common example is the message:

Your cluster meets the minimum version of Kubernetes, but we recommend you update to 1.15.0 or later.

You can update your cluster's version of Kubernetes if you like.

Step 5: Deploy Harness

When you have finished the preflight checks, click Deploy and Continue.

Harness is deployed in a few minutes.

In a new browser tab, go to the following URL, replacing <LB_URL> with the URL you entered in the Application URL setting in the KOTS admin console:

<LB_URL>/#/onprem-signup

For example:

http://harness.mycompany.com/#/onprem-signup

The Harness sign up page appears.

Sign up with a new account and then log in. Your new account will be added to the Harness Account Administrators User Group.

See Managing Users and Groups (RBAC).

Future Versions

To set up future versions of On-Prem, in the KOTS admin console, in the Version history tab, click Deploy. The new version is displayed in Deployed version.

Important Next Steps

Important: You cannot invite other users to Harness until a Harness Delegate is installed and a Harness SMTP Collaboration Provider is configured.
  1. Install the Harness Delegate: Delegate Installation Overview.
  2. Set up an SMTP Collaboration Provider in Harness for email notifications from the Harness Manager: Add SMTP Collaboration Provider.
    Ensure you open the correct port for your SMTP provider, such as Office 365.
  3. Add a Harness Secrets Manager. By default, On-Prem installations use the local Harness MongoDB for the default Harness Secrets Manager. This is not recommended. After On-Prem installation, configure a new Secrets Manager (Vault, AWS, etc.). You will need to open your network for the Secrets Manager connection.

Delegates and OpenShift

If you are deploying the Harness Kubernetes Delegate into an OpenShift cluster, you need to edit the Harness Kubernetes Delegate YAML before installing the Delegate.

You simply need to point to the OpenShift image.

Here's the default YAML with harness/delegate:latest:

...
apiVersion: apps/v1
kind: StatefulSet
...
spec:
  containers:
  - image: harness/delegate:latest

Change the image entry to harness/delegate:non-root-openshift:

...
apiVersion: apps/v1
kind: StatefulSet
...
spec:
  containers:
  - image: harness/delegate:non-root-openshift
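
You can make the edit by hand, or with a quick in-place substitution like the following (the file name is a placeholder for whatever you named the downloaded Delegate manifest):

# GNU sed; on macOS/BSD sed, use: sed -i '' 's|...|...|' <file>
sed -i 's|harness/delegate:latest|harness/delegate:non-root-openshift|' harness-delegate.yaml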

Updating Harness

Do not upgrade Harness more than 4 major releases at a time. Instead, upgrade through each interim release until you reach the latest release. A best practice is to upgrade Harness once a month.

Please follow these steps to update your Harness On-Prem installation.

The steps are very similar to how you installed Harness initially.

For more information on updating KOTS and applications, see Using CLI and Updating the Admin Console from KOTS.

Disconnected (Airgap)

The following steps require a private registry, just like the initial installation of Harness.

Upgrade Harness
  1. Download the latest release from Harness.
  2. Run the following command on the cluster hosting Harness, replacing the placeholders:
kubectl kots upstream upgrade harness \
--airgap-bundle <path to harness-<version>.airgap> \
--kotsadm-registry <private.registry.url> \
--registry-username <username> \
--registry-password <password> \
--deploy \
-n harness

Upgrade KOTS Admin Tool

To upgrade the KOTS admin tool, first you will push images to your private Docker registry.

  1. Run the following command to push the images, replacing the placeholders:
kubectl kots admin-console push-images ./<new-kotsadm>.tar.gz \
<private.registry.host>/harness \
--registry-username rw-username \
--registry-password rw-password
  2. Next, run the following command on the cluster hosting Harness, replacing the placeholders:
kubectl kots admin-console upgrade \
--kotsadm-registry <private.registry.host>/harness \
--registry-username rw-username \
--registry-password rw-password \
-n harness

Connected

The following steps require a secure connection to the Internet, just like the initial installation of Harness.

Upgrade Harness
  1. Run the following command on the cluster hosting Harness:
kubectl kots upstream upgrade harness --deploy -n harness
Upgrade KOTS Admin Tool
  1. Run the following command on the cluster hosting Harness:
kubectl kots admin-console upgrade -n harness

Monitoring Harness

Harness monitoring is performed using the built-in monitoring tools.

For steps on using the monitoring tools, see Prometheus from KOTS.

License Expired

If your license has expired, you will see a license expiration notice.

Contact your Harness Customer Success representative or support@harness.io.

Bring Down Harness Cluster for Planned Downtime

If you need to bring down the Harness cluster for any reason, you simply scale down the Harness Manager and Verification Service deployments to zero replicas. That is sufficient to stop background tasks and remove connections to the database.

Optionally, you can scale everything else down as well, but it is not necessary.

To bring Harness back up, first ensure that the Harness MongoDB is scaled back up to 3 instances and that Redis is also scaled up. Next, scale up the Harness Manager and Verification Service.
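
A minimal sketch of the scale-down and scale-up, assuming the harness namespace. The workload names below are placeholders; run kubectl get deployments,statefulsets -n harness to find the actual names in your installation:

# Planned downtime: stop background tasks and database connections
kubectl scale deployment <harness-manager-deployment> --replicas=0 -n harness
kubectl scale deployment <verification-service-deployment> --replicas=0 -n harness

# Bring Harness back up: MongoDB and Redis first, then Manager and Verification Service
kubectl scale statefulset <mongodb-statefulset> --replicas=3 -n harness
kubectl scale statefulset <redis-statefulset> --replicas=1 -n harness
kubectl scale deployment <harness-manager-deployment> --replicas=1 -n harness
kubectl scale deployment <verification-service-deployment> --replicas=1 -n harness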

Logging

For Kubernetes Cluster On-Prem, logs are available as standard out.

Use kubectl logs on any pod to see the logs.
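
For example, to list the pods and then tail the logs of one of them in the harness namespace (the pod name is a placeholder):

kubectl get pods -n harness
kubectl logs -f <pod-name> -n harness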

Enable TLS/SSL between MongoDB and Harness Components

You can now enable a TLS/SSL connection between the Harness On-Prem components (microservices) and the MongoDB database that is included in Harness On-Prem.

You can use public or self-signed certs.

Simply select True in Mongo Use SSL and then upload your ca.pem, client.pem, and mongo.pem files:

Instructions for creating self-signed certs
# Create CA 

openssl req -passout pass:password -new -x509 -days 3650 -extensions v3_ca -keyout ca_private.pem -out ca.pem -subj "/CN=CA/OU=MONGO/O=HARNESS/L=SFO/ST=CA/C=US" -config /usr/local/etc/openssl@1.1/openssl.cnf

# Create Client key

openssl req -newkey rsa:4096 -nodes -out client.csr -keyout client.key -subj '/CN=MC/OU=MONGO_CLIENTS/O=HARNESS/L=SFO/ST=CA/C=US'
Generating a 4096 bit RSA private key

# Create Server key

openssl req -newkey rsa:4096 -nodes -out mongo.csr -keyout mongo.key -subj '/CN=*.mongodb-replicaset-chart.<namespace>.svc.cluster.local/OU=MONGO/O=HARNESS/L=SFO/ST=CA/C=US'
Generating a 4096 bit RSA private key


# Please change <namespace> to the namespace where Harness is installed or will be installed

# Get Client crt

openssl x509 -passin pass:password -sha256 -req -days 365 -in client.csr -CA ca.pem -CAkey ca_private.pem -CAcreateserial -out client_signed.crt

# Get Server crt

openssl x509 -passin pass:password -sha256 -req -days 365 -in mongo.csr -CA ca.pem -CAkey ca_private.pem -CAcreateserial -out mongo_signed.crt -extensions v3_req -extfile <(
cat <<EOF
[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = 127.0.0.1
DNS.2 = localhost
DNS.3 = *.mongodb-replicaset-chart.<namespace>.svc.cluster.local
EOF
)
# Please change <namespace> to namespace name where Harness is installed / will be installed

# Combine Client crt and key to get client.pem

cat client_signed.crt client.key >client.pem

# Combine Server crt and key to get mongo.pem

cat mongo_signed.crt mongo.key > mongo.pem

Now you have the ca.pem, client.pem and mongo.pem files.
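
Before uploading, you can optionally verify that the client and server certificates chain back to the CA you created:

openssl verify -CAfile ca.pem client_signed.crt mongo_signed.crt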

Upload all files into the Mongo Use SSL settings.

Notes

Harness On-Prem installations do not currently support the Harness Helm Delegate.

Remove Previous kustomization for Ingress Controller

This option is only needed if you have installed Harness On-Prem previously. If this is a fresh install, you can go directly to Configure Harness.

If you have installed Harness On-Prem previously, you updated Harness manifests using kustomize for the ingress controller. This is no longer required.

Do the following to remove the kustomization:

  1. If you are using a single terminal, close the KOTS admin tool (Ctrl+C).
  2. Ensure kubectl is pointing to the cluster.
  3. Run the following command:
kubectl kots download --namespace harness --slug harness
This example assumes we are installing Harness in a namespace named harness. Please change the namespace according to your configuration.

This command will download a folder named harness in your current directory.

  4. In the harness folder, open the file kustomization.yaml:
vi harness/overlays/midstream/kustomization.yaml
  5. In patchesStrategicMerge, remove nginx-service.yaml (see the example after these steps).
  6. Save the file.
  7. Remove the nginx-service.yaml file:
rm -rf harness/overlays/midstream/nginx-service.yaml
  8. Upload Harness:
kubectl kots upload --namespace harness --slug harness ./harness
  9. Open the KOTS admin tool, and then deploy the uploaded version of Harness.
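
For reference, the midstream kustomization.yaml lists patch files under patchesStrategicMerge. The sketch below is illustrative only (the other entries vary by release); the point of step 5 is that nginx-service.yaml no longer appears in the list:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- pullsecrets.yaml       # illustrative existing entry; keep whatever is already listed
# nginx-service.yaml removed in step 5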

