On-Prem Embedded Cluster Setup

This topic covers installing Harness Kubernetes On-Prem as an embedded Kubernetes cluster.

Installing Harness On-Prem into an embedded Kubernetes cluster is a simple process: you prepare your VMs and network, then use the Kubernetes installer kURL and the KOTS plugin to complete the installation and deploy Harness.

Once you have set up Harness on a VM, you can add additional worker nodes by simply running a command.

Harness On-Prem uses the open source Kubernetes installer kURL and the KOTS plugin for installation. See Install with kURL from kURL and Installing an Embedded Cluster from KOTS.

Step 1: Set up VM Requirements

Ensure that your VMs meet the requirements specified in Embedded On-Prem Infrastructure Requirements.

Different cloud platforms use different methods for grouping VMs (GCP instance groups, AWS target groups, and so on). Set up your three VMs using whichever grouping method works best with your platform's networking.
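For example, on GCP you might create the three VMs and place them in an unmanaged instance group so they can later be used as load balancer backends. This is only a sketch with assumed names, zone, and machine type; size the VMs according to Embedded On-Prem Infrastructure Requirements.

# Hypothetical GCP example; instance names, zone, and machine type are placeholders
gcloud compute instances create harness-node-1 harness-node-2 harness-node-3 \
  --zone=us-central1-a --machine-type=n1-standard-8

# Group the VMs so they can serve as load balancer backends
gcloud compute instance-groups unmanaged create harness-group --zone=us-central1-a
gcloud compute instance-groups unmanaged add-instances harness-group \
  --zone=us-central1-a --instances=harness-node-1,harness-node-2,harness-node-3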

Step 2: Set Up Load Balancer and Networking Requirements

Ensure that your networking meets the requirements specified in Embedded On-Prem Infrastructure Requirements.

You will need two load balancers, as described in Embedded On-Prem Infrastructure Requirements: an HTTP load balancer for routing external traffic to the VMs, and a TCP load balancer used as the in-cluster load balancer.

During installation, you are asked for the IP address of the in-cluster TCP load balancer first.

When you configure the Harness On-Prem application in the KOTS admin console, you are asked for the HTTP load balancer URL.
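As a quick sanity check of the networking, you can test that the load balancer frontends are reachable from your workstation or VMs. The addresses below reuse example IPs from this topic and are placeholders; note that these checks only succeed once the corresponding backends are up (the Kubernetes API server behind the TCP load balancer, and the Harness NodePort behind the HTTP load balancer), so they are most useful when re-checking firewall rules after installation.

# Placeholder addresses; substitute your own load balancer frontends
nc -vz 10.128.0.50 6443        # TCP load balancer forwarding to the Kubernetes API port
curl -sI http://35.194.31.219/ # HTTP load balancer forwarding to the Harness NodePort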

Step 3: Install Harness using kURL

Once you have your VMs and networking requirements set up, you can install Harness.

Log into one of your VMs, and then run the following command:

curl -sSL https://k8s.kurl.sh/harness | sudo bash -s ha

This will install the entire On-Prem Kubernetes cluster and all related microservices.

The -s ha parameter is used to set up high availability.
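For reference, here is the same command annotated; this only explains standard bash behavior, it is not a different installer option.

# "-s" makes bash read the installer script from stdin (the curl pipe) and treat
# the remaining words as positional arguments, so "ha" is passed to the kURL
# installer and enables the high-availability setup.
curl -sSL https://k8s.kurl.sh/harness | sudo bash -s ha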

First, you are prompted to provide the IP address of the TCP load balancer used for cluster high availability:

The installer will use network interface 'ens4' (with IP address '10.128.0.25')
Please enter a load balancer address to route external and internal traffic to the API servers.
In the absence of a load balancer address, all traffic will be routed to the first master.
Load balancer address:

This is the TCP load balancer you created in Embedded On-Prem Infrastructure Requirements.

For example, a GCP TCP load balancer might use port 6443 in its frontend forwarding rule.

Enter the IP address and port of your TCP load balancer (for example, 10.128.0.50:6443), and press Enter. The installation continues, beginning like this:

...
Fetching weave-2.5.2.tar.gz
Fetching rook-1.0.4.tar.gz
Fetching contour-1.0.1.tar.gz
Fetching registry-2.7.1.tar.gz
Fetching prometheus-0.33.0.tar.gz
Fetching kotsadm-1.16.0.tar.gz
Fetching velero-1.2.0.tar.gz
Found pod network: 10.32.0.0/22
Found service network: 10.96.0.0/22
...

Once the installation process is complete, KOTS provides you with several configuration settings and commands. Save these settings and commands.

  • KOTS admin console and password:
Kotsadm: http://00.000.000.000:8800
Login with password (will not be shown again): D1rgBIu21
If you need to reset your password later, see kots reset-password.
  • Prometheus, Grafana, and Alertmanager ports and passwords:
The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:RF1KuqreN .
  • kubectl access to your cluster:
To access the cluster with kubectl, reload your shell:
bash -l
  • The command to add worker nodes to the installation:
To add worker nodes to this installation, run the following script on your other nodes:

curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130

We will use this command later.

  • Add master nodes:
To add MASTER nodes to this installation, run the following script on your other nodes
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=34.71.32.244:6443 kubeadm-token=c2yack.q7lt3z6yuevqlmtf kubeadm-token-ca-hash=sha256:9db504ecdee08ff6dfa3b299ce95302fe53dd632a2e9356c55e9272db72d60d1 kubernetes-version=1.15.3 cert-key=f0373e812e0657b4f727e90a7286c5b65539dfe7ee5dc535df0a1bcf74ad5c57 control-plane docker-registry-ip=10.96.2.100
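After reloading your shell with bash -l as shown above, you can verify that the embedded cluster is healthy using standard kubectl commands. This is a general sketch; pod names and namespaces vary by installation.

# Confirm the node is Ready and the cluster pods are running
kubectl get nodes -o wide
kubectl get pods --all-namespaces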

In a browser, enter the Kotsadm link.

The browser displays a TLS warning.

Click Continue to Setup.

In the warning page, click Advanced, then click Proceed to continue to the admin console.

KOTS uses a self-signed certificate, but you can upload your own.

Upload your certificate or click Skip and continue.

Log into the console using the password provided in the installation output.

Step 4: Upload Your Harness License

Once you are logged into the KOTS admin console, you can upload your Harness license.

Obtain the Harness license file from your Harness Customer Success contact or email support@harness.io.

Drag your license YAML file into the KOTS admin tool:

Next, upload the license file:

Now that the license file is uploaded, you can install Harness.

Option 1: Download Harness using Airgap Bundle

Installing Harness using an airgap bundle requires a Docker registry.

You provide the KOTS admin tool with your registry login information and the airgap bundle. KOTS expands the bundle, pushes it to your registry, and then pulls the bundle from the registry on behalf of the Kubernetes cluster.

The username and password you provide to the KOTS admin tool should have push and pull permissions on the registry.
To see if your license supports an airgap bundle installation, open it and look for isAirgapSupported: true.
  1. Get the airgap bundle from Harness.
  2. Drag the file into the Drag your airgap bundle here or choose a bundle to upload section.
  3. In Hostname, enter the repo hostname. For example, if the Docker login is docker login mycompany-harness.jfrog.io, then the hostname is mycompany-harness.jfrog.io.
  4. Enter the username and password for the registry login. This account should have permissions to push and pull from the registry.
  5. In Registry Namespace, enter the namespace identified in your repository path. For example, in the following repo, the Repository Path is kots.
  6. Click Upload airgap bundle. KOTS will unpack the bundle and upload it to your repo.
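Before uploading the bundle, you may want to confirm that the credentials you entered can both push to and pull from the registry. A minimal sketch, assuming the hypothetical registry mycompany-harness.jfrog.io and namespace kots used in the examples above, with a throwaway test image:

# Hypothetical registry, namespace, and image names; substitute your own
docker login mycompany-harness.jfrog.io
docker pull busybox:latest
docker tag busybox:latest mycompany-harness.jfrog.io/kots/busybox:push-test
docker push mycompany-harness.jfrog.io/kots/busybox:push-test
docker pull mycompany-harness.jfrog.io/kots/busybox:push-test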

Next, you will provide KOTS with the Harness configuration information (Load Balancer URL and NodePort).

Option 2: Download Harness over the Internet

If you are installing Harness over the Internet, click the download Harness from the Internet link.

KOTS begins installing Harness into your cluster.

Next, you will provide KOTS with the Harness configuration information (Load Balancer URL and NodePort).

Step 5: Configure Harness

On-Prem Embedded Cluster requires that you provide a Load Balancer URL and NodePort.

  1. In Load Balancer URL, enter the URL for the HTTP load balancer you set up for routing external traffic to your VMs. Typically, this is the frontend IP address for the load balancer. For example, here is an HTTP load balancer in GCP and how you enter its information into Harness Configuration.
    If you have set up DNS to resolve a domain name to the load balancer IP, enter that domain name in Load Balancer URL.
  2. In NodePort, enter the port number you set up for the load balancer backend: 7143.
  3. When you are done, click Continue.
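Once Harness is deployed (Step 7), you can check that the HTTP load balancer and NodePort are wired together correctly. The addresses below reuse example IPs from this topic and are placeholders; 7143 is the NodePort entered above.

# Placeholder addresses; run the first check from one of the cluster nodes
curl -sI http://localhost:7143/    # something should answer on the NodePort
curl -sI http://35.194.31.219/     # the load balancer frontend should forward to it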

Option: Advanced Configurations

In the Advanced Configurations section, there are a number of advanced settings you can configure. If this is the first time you are setting up Harness On-Prem, there is no need to fine-tune the installation with these settings.

You can change the settings later in the KOTS admin console's Config tab:

Step 6: Perform Preflight Checks

Preflight checks run automatically and verify that your setup meets the minimum requirements.

You can skip these checks, but we recommend you let them run.

Fix any issues in the preflight steps.

When the preflight checks are finished, click Continue.

The Harness application appears.

Step 7: Deploy Harness

In the KOTS admin console, in the Version history tab, click Deploy. The new version is displayed in Deployed version.

In a new browser tab, go to the following URL, replacing <LB_URL> with the URL you entered in the Load Balancer URL setting in the KOTS admin console:

<LB_URL>/#/onprem-signup

For example:

http://35.194.31.219/#/onprem-signup

The Harness sign up page appears.

Sign up with a new account and then log in. Your new account will be added to the Harness Account Administrators User Group.

See Managing Users and Groups (RBAC).

Step 8: Add Worker Nodes

Now that Harness On-Prem is installed in one VM, you can install it on other VMs using the command provided when you installed Harness:

To add worker nodes to this installation, run the following script on your other nodes
curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130

Run this on each VM in your group. The installation begins something like this:

...
Docker already exists on this machine so no docker install will be performed
Containerd already exists on this machine so no containerd install will be performed
The installer will use network interface 'ens4' (with IP address '10.128.0.44')
Loaded image: replicated/kurl-util:v2020.07.15-0
Loaded image: weaveworks/weave-kube:2.5.2
Loaded image: weaveworks/weave-npc:2.5.2
Loaded image: weaveworks/weaveexec:2.5.2
...

When the installation is complete, you will see the worker join the cluster and its preflight checks run:

⚙  Join Kubernetes node
+ kubeadm join --config /opt/replicated/kubeadm.conf --ignore-preflight-errors=all
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09

The worker is now joined.
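You can confirm the new node has joined by listing the nodes from one of the masters, for example:

# Run on a master node; the new worker should appear and move to Ready
kubectl get nodes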

Important Next Steps

Important: You cannot invite other users to Harness until a Harness Delegate is installed and a Harness SMTP Collaboration Provider is configured.
  1. Install the Harness Delegate: Delegate Installation and Management.
  2. Set up an SMTP Collaboration Provider in Harness for email notifications from the Harness Manager: Add SMTP Collaboration Provider.
    Ensure you open the correct port for your SMTP provider, such as Office 365. A quick connectivity check is sketched after this list.
  3. Add a Harness Secrets Manager. By default, On-Prem installations use the local Harness MongoDB as the default Harness Secrets Manager, which is not recommended. After On-Prem installation, configure a new Secrets Manager (Vault, AWS, etc.). You will need to open your network for the Secrets Manager connection.
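For item 2, you can quickly confirm that outbound SMTP traffic is allowed from your network. This is only a sketch; the host and port below assume Office 365 on the standard submission port, so substitute your provider's values.

# Hypothetical check: Office 365 SMTP submission endpoint; replace with your provider's host and port
nc -vz smtp.office365.com 587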

Updating Harness

Updating Harness over the Internet follows the standard KOTS updating method described in Updating a KOTS application from KOTS. Please follow those steps to update your Harness On-Prem installation.

Updating an air-gapped installation follows the same process as the original installation: Harness provides you with a new airgap bundle, which you drag into the KOTS admin console.

Monitoring Harness

Harness monitoring is performed using the built-in monitoring tools.

When you installed Harness, you were provided with the ports and passwords for Prometheus, Grafana, and Alertmanager:

The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:RF1KuqreN .

For steps on using the monitoring tools, see Prometheus from KOTS.
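For example, with the NodePorts shown above you can reach the monitoring UIs at any node's IP address. The address below is a placeholder; the Grafana credentials come from your installation output.

# Placeholder node IP; substitute one of your VM addresses
curl -sI http://10.128.0.24:30900   # Prometheus
curl -sI http://10.128.0.24:30902   # Grafana (log in as admin with the generated password)
curl -sI http://10.128.0.24:30903   # Alertmanager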

License Expired

If your license has expired, you will see a license expiration notice in the KOTS admin console.

Contact your Harness Customer Success representative or support@harness.io.

Notes

Harness On-Prem installations do not currently support the Harness Helm Delegate.
