Virtual Machine On-Prem: Installation Guide

This topic covers installing Harness Virtual Machine On-Prem as a Kubernetes cluster embedded on your target VMs.

Installing Harness On-Prem into an embedded Kubernetes cluster is a simple process where you prepare your VMs and network, and use the Kubernetes installer kURL and the KOTS plugin to complete the installation and deploy Harness.

Once you have set up Harness on a VM, you can add additional worker nodes by simply running a command.

Harness On-Prem uses the open source Kubernetes installer kURL and the KOTS plugin for installation. See Install with kURL from kURL and Installing an Embedded Cluster from KOTS.

Step 1: Set up VM Requirements

Ensure that your VMs meet the requirements specified in Virtual Machine On-Prem: Infrastructure Requirements.

Different cloud platforms use different methods for grouping VMs (GCP instance groups, AWS target groups, etc.). Set up your three VMs using the platform method that works best with the platform's networking processes.
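
For example, on GCP you might group the three VMs in an unmanaged instance group so your load balancers can target them as a single backend. This is only an illustration; the group name, zone, and instance names below are hypothetical, and an equivalent construct on another platform (such as an AWS target group) works just as well.

# Hypothetical GCP example: group three existing Harness VMs.
# Replace the group name, zone, and instance names with your own.
gcloud compute instance-groups unmanaged create harness-group --zone us-central1-a
gcloud compute instance-groups unmanaged add-instances harness-group \
  --zone us-central1-a \
  --instances harness-vm-1,harness-vm-2,harness-vm-3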

Step 2: Set Up Load Balancer and Networking Requirements

Ensure that your networking meets the requirements specified in Virtual Machine On-Prem: Infrastructure Requirements.

You will need two load balancers, as described in Virtual Machine On-Prem: Infrastructure Requirements: an HTTP load balancer for routing external traffic to the VMs, and a TCP load balancer for in-cluster traffic to the Kubernetes API servers.

During installation, you are asked for the IP address of the in-cluster TCP load balancer first.

When you configure the Harness On-Prem application in the KOTS admin console, you are asked for the HTTP load balancer URL.
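
Once the first master node is installed (later in this topic), you can confirm that the TCP load balancer forwards traffic to the Kubernetes API servers. The address below is the example address used later in this topic; before the first master is installed, this check will fail because nothing is listening on port 6443 yet.

# Example check (run after the first master is installed): confirm the TCP
# load balancer forwards traffic to the Kubernetes API servers on port 6443.
# Replace 10.128.0.50 with your TCP load balancer's frontend IP address.
nc -vz 10.128.0.50 6443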

Option 1: Disconnected Installation

Disconnected Installation involves downloading the Harness On-Prem archive file onto a jump box, and then copying the file to each On-Prem host VM you want to use.

On each VM, you extract and install Harness On-Prem.

On your jump box, run the following command to obtain the On-Prem file:

curl -LO https://kurl.sh/bundle/harness.tar.gz

Copy the file to a Harness On-Prem host and extract it (tar xvf harness.tar.gz).
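
For example, you can copy the archive from the jump box with scp and then extract it on the VM (the user name and host IP below are placeholders):

# Copy the archive from the jump box to a host VM (placeholder user and IP),
# then log in and extract it.
scp harness.tar.gz ubuntu@10.128.0.25:~
ssh ubuntu@10.128.0.25
tar xvf harness.tar.gz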

On the VM, install Harness:

cat install.sh | sudo bash -s airgap ha

This will install the entire On-Prem Kubernetes cluster and all related microservices.

The ha parameter is used to set up high availability. If you are not using high availability, you can omit the parameter.
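
For example, a single-node (non-HA) disconnected installation looks like this:

# Disconnected install without high availability: omit the ha parameter.
cat install.sh | sudo bash -s airgap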

Provide Load Balancer Settings

First, you are prompted to provide the IP address of the TCP load balancer used for cluster high availability:

The installer will use network interface 'ens4' (with IP address '10.128.0.25')
Please enter a load balancer address to route external and internal traffic to the API servers.
In the absence of a load balancer address, all traffic will be routed to the first master.
Load balancer address:

This is the TCP load balancer you created in Virtual Machine On-Prem: Infrastructure Requirements.

For example, a GCP TCP load balancer would have a frontend forwarding rule using port 6443.

Enter the IP address and port of your TCP load balancer (for example, 10.128.0.50:6443), and press Enter. The installation continues, beginning like this:

...
Fetching weave-2.5.2.tar.gz
Fetching rook-1.0.4.tar.gz
Fetching contour-1.0.1.tar.gz
Fetching registry-2.7.1.tar.gz
Fetching prometheus-0.33.0.tar.gz
Fetching kotsadm-1.16.0.tar.gz
Fetching velero-1.2.0.tar.gz
Found pod network: 10.32.0.0/22
Found service network: 10.96.0.0/22
...

Review Configuration Settings

Once the installation process is complete, KOTS provides you with several configuration settings and commands. Save these settings and commands.

  • KOTS admin console and password:
Kotsadm: http://00.000.000.000:8800
Login with password (will not be shown again): D1rgBIu21
If you need to reset your password later, see kots reset-password.
  • Prometheus, Grafana, and Alertmanager ports and passwords:
The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:RF1KuqreN .
  • kubectl access to your cluster (a verification example follows this list):
To access the cluster with kubectl, reload your shell:
bash -l
  • The command to add worker nodes to the installation:
To add worker nodes to this installation, run the following script on your other nodes:

curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130

You will use this command later, in Step 6: Add Worker Nodes.

  • Add master nodes:
To add MASTER nodes to this installation, run the following script on your other nodes
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=34.71.32.244:6443 kubeadm-token=c2yack.q7lt3z6yuevqlmtf kubeadm-token-ca-hash=sha256:9db504ecdee08ff6dfa3b299ce95302fe53dd632a2e9356c55e9272db72d60d1 kubernetes-version=1.15.3 cert-key=f0373e812e0657b4f727e90a7286c5b65539dfe7ee5dc535df0a1bcf74ad5c57 control-plane docker-registry-ip=10.96.2.100
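
After you reload your shell as noted in the kubectl item above, a quick way to confirm that the embedded cluster is healthy (a sketch; node names and pod lists will differ in your environment) is:

# Reload the shell so kubectl is configured, then confirm the cluster responds.
bash -l
kubectl get nodes
kubectl get pods --all-namespaces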

Log into the Admin Tool

In a browser, enter the Kotsadm link.

The browser displays a TLS warning.

Click Continue to Setup.

In the warning page, click Advanced, then click Proceed to continue to the admin console.

KOTS uses a self-signed certificate, but you can upload your own.

Upload your certificate or click Skip and continue.

Log into the console using the password provided in the installation output.

Upload Your Harness License

Once you are logged into the KOTS admin console, you can upload your Harness license.

Obtain the Harness license file from your Harness Customer Success contact or email support@harness.io.

Drag your license YAML file into the KOTS admin tool and upload it.

Now that the license file is uploaded, you can install Harness.

Go to Step 3: Configure Harness.

Option 2: Connected Installation

Once you have your VMs and networking requirements set up, you can install Harness.

Log into one of your VMs, and then run the following command:

curl -sSL https://k8s.kurl.sh/harness | sudo bash -s ha

This will install the entire On-Prem Kubernetes cluster and all related microservices.

The ha parameter is used to set up high availability. If you are not using high availability, you can omit it.
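
For example, a single-node (non-HA) connected installation looks like this:

# Connected install without high availability: omit the ha parameter.
curl -sSL https://k8s.kurl.sh/harness | sudo bash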

Provide Load Balancer Settings

First, you are prompted to provide the IP address of the TCP load balancer used for cluster high availability:

The installer will use network interface 'ens4' (with IP address '10.128.0.25')
Please enter a load balancer address to route external and internal traffic to the API servers.
In the absence of a load balancer address, all traffic will be routed to the first master.
Load balancer address:

This is the TCP load balancer you created in Virtual Machine On-Prem: Infrastructure Requirements.

For example, a GCP TCP load balancer would have a frontend forwarding rule using port 6443.

Enter the IP address and port of your TCP load balancer (for example, 10.128.0.50:6443), and press Enter. The installation continues, beginning like this:

...
Fetching weave-2.5.2.tar.gz
Fetching rook-1.0.4.tar.gz
Fetching contour-1.0.1.tar.gz
Fetching registry-2.7.1.tar.gz
Fetching prometheus-0.33.0.tar.gz
Fetching kotsadm-1.16.0.tar.gz
Fetching velero-1.2.0.tar.gz
Found pod network: 10.32.0.0/22
Found service network: 10.96.0.0/22
...

Review Configuration Settings

Once the installation process is complete, KOTS provides you with several configuration settings and commands. Save these settings and commands.

  • KOTS admin console and password:
Kotsadm: http://00.000.000.000:8800
Login with password (will not be shown again): D1rgBIu21
If you need to reset your password later, see kots reset-password.
  • Prometheus, Grafana, and Alertmanager ports and passwords:
The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:RF1KuqreN .
  • kubectl access to your cluster:
To access the cluster with kubectl, reload your shell:
bash -l
  • The command to add worker nodes to the installation:
To add worker nodes to this installation, run the following script on your other nodes:

curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130

You will use this command later, in Step 6: Add Worker Nodes.

  • Add master nodes:
To add MASTER nodes to this installation, run the following script on your other nodes
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=34.71.32.244:6443 kubeadm-token=c2yack.q7lt3z6yuevqlmtf kubeadm-token-ca-hash=sha256:9db504ecdee08ff6dfa3b299ce95302fe53dd632a2e9356c55e9272db72d60d1 kubernetes-version=1.15.3 cert-key=f0373e812e0657b4f727e90a7286c5b65539dfe7ee5dc535df0a1bcf74ad5c57 control-plane docker-registry-ip=10.96.2.100

Log into the Admin Tool

In a browser, enter the Kotsadm link.

The browser displays a TLS warning.

Click Continue to Setup.

In the warning page, click Advanced, then click Proceed to continue to the admin console.

KOTS uses a self-signed certificate, but you can upload your own.

Upload your certificate or click Skip and continue.

Log into the console using the password provided in the installation output.

Upload Your Harness License

Once you are logged into the KOTS admin console, you can upload your Harness license.

Obtain the Harness license file from your Harness Customer Success contact or email support@harness.io.

Drag your license YAML file into the KOTS admin tool and upload it.

Now that the license file is uploaded, you can install Harness.

Download Harness over the Internet

If you are installing Harness over the Internet, click the download Harness from the Internet link.

KOTS begins installing Harness into your cluster.

Next, you will provide KOTS with the Harness configuration information (Load Balancer URL and NodePort).

Step 3: Configure Harness

Virtual Machine On-Prem requires that you provide a Load Balancer URL and NodePort.

  1. In Load Balancer URL, enter the full URL for the HTTP load balancer you set up for routing external traffic to your VMs.
    Include the scheme and hostname/IP. For example, https://app.example.com.
    Typically, this is the frontend IP address for the load balancer (for example, the frontend IP of an HTTP load balancer in GCP).
    If you have set up DNS to resolve a domain name to the load balancer IP, enter that domain name in Load Balancer URL.
  2. In NodePort, enter the port number you set up for the load balancer backend: 7143 (a check of this port is shown after this list).
  3. When you are done, click Continue.
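
After you deploy Harness in Step 5, a quick way to confirm the NodePort you entered here is to check it directly on one of the VMs. This is only a sketch: it assumes the default NodePort of 7143, the VM IP is a placeholder, and your backend may answer over HTTPS instead of HTTP.

# Placeholder check: confirm the Harness NodePort (7143) answers on a VM.
# Replace 10.128.0.25 with one of your VM's IP addresses; use https:// and -k
# if your backend serves TLS.
curl -s -o /dev/null -w "%{http_code}\n" http://10.128.0.25:7143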

Option: Advanced Configurations

In the Advanced Configurations section, there are a number of advanced settings you can configure. If this is the first time you are setting up Harness On-Prem, there is no need to fine-tune the installation with these settings.

You can change these settings later in the KOTS admin console's Config tab.

Step 4: Perform Preflight Checks

Preflight checks run automatically and verify that your setup meets the minimum requirements.

You can skip these checks, but we recommend you let them run.

Fix any issues in the preflight steps.

When the preflight checks are finished, click Continue.

The Harness application appears.

Step 5: Deploy Harness

In the KOTS admin console, in the Version history tab, click Deploy. The new version is displayed in Deployed version.

In a new browser tab, go to the following URL, replacing <LB_URL> with the URL you entered in the Load Balancer URL setting in the KOTS admin console:

<LB_URL>/#/onprem-signup

For example:

http://35.194.31.219/#/onprem-signup

The Harness sign up page appears.

Sign up with a new account and then log in. Your new account will be added to the Harness Account Administrators User Group.

See Managing Users and Groups (RBAC).

Step 6: Add Worker Nodes

Now that Harness On-Prem is installed on one VM, you can add your other VMs as worker nodes using the command provided when you installed Harness:

To add worker nodes to this installation, run the following script on your other nodes
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130

Run this on each VM in your group. The installation begins something like this:

...
Docker already exists on this machine so no docker install will be performed
Container already exists on this machine so no container install will be performed
The installer will use network interface 'ens4' (with IP address '10.128.0.44')
Loaded image: replicated/kurl-util:v2020.07.15-0
Loaded image: weaveworks/weave-kube:2.5.2
Loaded image: weaveworks/weave-npc:2.5.2
Loaded image: weaveworks/weaveexec:2.5.2
...

When installation is complete, you can see the worker join the cluster and run its pre-flight checks:

⚙  Join Kubernetes node
+ kubeadm join --config /opt/replicated/kubeadm.conf --ignore-preflight-errors=all
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09

The worker is now joined.
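
To confirm that the worker has joined, you can list the nodes from the first VM; each joined VM should appear and report a Ready status after a minute or two:

# Run on the first (master) VM: each joined worker should eventually show Ready.
kubectl get nodes -o wide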

Important Next Steps

Important: You cannot invite other users to Harness until a Harness Delegate is installed and a Harness SMTP Collaboration Provider is configured.
  1. Install the Harness Delegate: Delegate Installation and Management.
  2. Set up an SMTP Collaboration Provider in Harness for email notifications from the Harness Manager: Add SMTP Collaboration Provider.
    Ensure you open the correct port for your SMTP provider, such as Office 365 (see the example connectivity check after this list).
  3. Add a Harness Secrets Manager. By default, On-Prem installations use the local Harness MongoDB as the default Harness Secrets Manager. This is not recommended. After On-Prem installation, configure a new Secrets Manager (Vault, AWS, etc.). You will need to open your network for the Secrets Manager connection.
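
For example, to confirm that a VM can reach your SMTP provider before configuring it in Harness, you can test the connection from the VM. The host and port below assume Office 365's SMTP submission endpoint; substitute your own provider's values.

# Example connectivity check (assumes Office 365 SMTP submission on port 587);
# replace the host and port with your SMTP provider's values.
nc -vz smtp.office365.com 587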

Updating Harness

Please follow these steps to update your Harness On-Prem installation.

The steps are very similar to how you installed Harness initially.

For more information, see Updating an Embedded Cluster from KOTS.

Disconnected (Airgap)

The following steps require a private registry, just like the initial installation of Harness.

Upgrade Harness
  1. Download the latest release from Harness.
  2. Run the following command on the VM(s) hosting Harness, replacing the placeholders:
kubectl kots upstream upgrade harness \ 
--airgap-bundle <path to harness-<version>.airgap> \
--kotsadm-namespace harness-kots \
-n default
Upgrade Embedded Kubernetes Cluster and KOTS
  1. Download the latest version of Harness:
curl -SL -o harnesskurl.tar.gz https://kurl.sh/bundle/harness.tar.gz
  2. Move the tar.gz file to the disconnected VMs.
  3. On each VM, run the following commands to update Harness:
tar xzvf harnesskurl.tar.gz
cat install.sh | sudo bash -s airgap

Connected

The following steps require a secure connection to the Internet, just like the initial installation of Harness.

Upgrade Harness
  1. Run the following command on the VMs hosting Harness:
kubectl kots upstream upgrade harness -n harness
Upgrade Embedded Kubernetes Cluster and KOTS
  1. Run the following command on the VMs hosting Harness:
curl -sSL https://kurl.sh/harness | sudo bash

Monitoring Harness

Harness monitoring is performed using the built-in monitoring tools.

When you installed Harness, you were provided with connection information for the Prometheus, Grafana, and Alertmanager ports and passwords:

The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:RF1KuqreN .

For steps on using the monitoring tools, see Prometheus from KOTS.
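
For example, assuming the NodePorts shown above, you can check that the monitoring UIs respond on any VM in the cluster (the VM IP is a placeholder):

# Placeholder VM IP; NodePorts as reported by the installer output above.
curl -s -o /dev/null -w "Prometheus:   %{http_code}\n" http://10.128.0.25:30900
curl -s -o /dev/null -w "Grafana:      %{http_code}\n" http://10.128.0.25:30902
curl -s -o /dev/null -w "Alertmanager: %{http_code}\n" http://10.128.0.25:30903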

License Expired

If your license has expired, you will see a license expiration notice.

Contact your Harness Customer Success representative or support@harness.io.

Notes

Harness On-Prem installations do not currently support the Harness Helm Delegate.

