Existing Cluster On-Prem Infrastructure Requirements


This document lists the infrastructure requirements for installing Harness Kubernetes On-Prem in an existing Kubernetes cluster.

For requirements when installing the Kubernetes On-Prem cluster on VMs, see Embedded Kubernetes On-Prem Infrastructure Requirements.


Production Installation

Here are the requirements for each microservice.

| Microservice | Pods | CPU / Pod (cores) | Memory / Pod (GB) | Total CPU (cores) | Total Memory (GB) |
| --- | --- | --- | --- | --- | --- |
| Manager | 2 | 2 | 4 | 4 | 8 |
| Verification | 2 | 1 | 3 | 2 | 6 |
| Machine Learning Engine | 1 | 8 | 2 | 8 | 2 |
| UI | 2 | 0.25 | 0.25 | 0.5 | 0.5 |
| MongoDB | 3 | 4 | 8 | 12 | 24 |
| Proxy | 1 | 0.5 | 0.5 | 0.5 | 0.5 |
| Ingress | 2 | 0.25 | 0.25 | 0.5 | 0.5 |
| TimescaleDB | 3 | 2 | 8 | 6 | 24 |
| KOTS Admin Pods | | | | 4 | 8 |
| Total | | | | 37.5 | 73.5 |

The KOTS Admin Pods requirements are for a full stack. In an existing cluster, they will likely be lower.
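The per-pod figures in the table translate into Kubernetes resource requests and limits. As an illustration only (the actual values are set by the manifests shipped with the Harness installer, and this snippet is hypothetical), a pod sized like the Manager's 2 CPU / 4 GB would carry a spec such as:

resources:
  requests:
    cpu: "2"        # CPU / Pod from the table above
    memory: 4Gi     # Memory / Pod from the table above, assuming GB
  limits:
    cpu: "2"
    memory: 4Gi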

Dev Installation

Here are the requirements for each microservice.

| Microservice | Pods | CPU / Pod (cores) | Memory / Pod (GB) | Total CPU (cores) | Total Memory (GB) |
| --- | --- | --- | --- | --- | --- |
| Manager | 1 | 2 | 4 | 2 | 4 |
| Verification | 1 | 1 | 3 | 1 | 3 |
| Machine Learning Engine | 1 | 3 | 2 | 3 | 2 |
| UI | 1 | 0.25 | 0.25 | 0.25 | 0.25 |
| MongoDB | 3 | 2 | 4 | 6 | 12 |
| Proxy | 1 | 0.5 | 0.5 | 0.5 | 0.5 |
| Ingress | 1 | 0.25 | 0.25 | 0.25 | 0.25 |
| TimescaleDB | 1 | 2 | 8 | 2 | 8 |
| KOTS Admin Pods | | | | 4 | 8 |
| Total | | | | 19 | 38 |

Node Specifications: 8 vCPUs (cores) and more than 12 GB of memory per node.
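To confirm that your nodes meet these specifications, you can list the allocatable CPU and memory on each node with standard kubectl:

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory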

Storage Requirements

You should have a Kubernetes Storage Class attached to the Kubernetes cluster.

You need to provide the StorageClass name during installation.
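To see which StorageClasses are available in your cluster (and which one is the default), you can run:

kubectl get storageclass

Note the NAME of the class you want Harness to use; you will provide it during installation.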

Harness uses a total of 1000 GB of space, distributed as:

  • MongoDB - 3 pods * 200 GB each = 600 GB
  • TimescaleDB - 3 pods * 120 GB each = 360 GB
  • Redis - 40 GB total

For a POC installation, the total requirement is 200 GB, distributed as:

  • MongoDB - 3 pods * 50 GB each = 150 GB
  • TimescaleDB - 1 pod * 20 GB = 20 GB
  • Redis - 30 GB total

Whitelist and Outbound Access Requirements

Whitelist the following URLs:

  • kots.io — KOTS pulls the latest versions of the kubectl plugin and the KOTS admin console.
  • app.replicated.com — The KOTS admin console connects to this URL to check for the releases available to your license.
  • proxy.replicated.com — Proxies your registry to pull your private images.

Allow outbound access to the following URLs:

  • proxy.replicated.com
  • replicated.app
  • k8s.kurl.sh
  • app.replicated.com

Outbound access is required for a connected install only. If you have opted for Airgap mode, it is not required.
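For a connected install, you can quickly verify outbound reachability from a node (or from a pod in the cluster) with a simple check like the following; the curl flags shown are illustrative:

for url in kots.io app.replicated.com proxy.replicated.com replicated.app k8s.kurl.sh; do
  curl -sS -o /dev/null -w "%{http_code} $url\n" "https://$url"
done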

If your cluster does not have direct outbound connectivity and needs a proxy for outbound connections, follow the instructions at https://docs.docker.com/network/proxy to set up a proxy on the node machines.

Cluster and Network Architecture

The following diagram illustrates the simple cluster and networking architecture for an On-Prem Existing Cluster setup.

The following sections go into greater detail.

Namespace Requirements

In the examples in all Harness On-Prem documentation, we use a namespace named harness.

If you use a different namespace, ensure that you update any spec samples provided by Harness.
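If the harness namespace does not exist yet, create it before installing (substitute your own namespace name if it differs):

kubectl create namespace harness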

Load Balancer

You need to set up a Load Balancer before installing Harness On-Prem.

During installation, you will provide the Load Balancer URL in the KOTS admin console.

After On-Prem is installed, the load balancer is used to access the Harness Manager UI using a web browser.

Follow the steps on creating the load balancer as part of the process described in Kubernetes On-Prem Existing Cluster Setup.
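As a rough sketch only (the actual steps are in Kubernetes On-Prem Existing Cluster Setup, and the service name, labels, and ports below are assumptions), a cloud load balancer is typically provisioned by creating a Service of type LoadBalancer in front of the ingress controller:

apiVersion: v1
kind: Service
metadata:
  name: harness-ingress-loadbalancer   # hypothetical name
  namespace: harness
spec:
  type: LoadBalancer
  selector:
    app: harness-ingress-controller    # assumed label on the ingress controller pods
  ports:
  - name: http
    port: 80
    targetPort: 8080                   # assumed container port
    protocol: TCP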

RBAC Requirements

You need to add a Role to the cluster for the KOTS plugin.

If the user account you will use to install KOTS already has permissions to create roles, then this KOTS plugin role is created automatically when you install KOTS (via kubectl kots install). For this topic, we will walk you through creating the role.
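To check whether your user account can create Roles and RoleBindings in the target namespace (and therefore whether kubectl kots install can create the role for you automatically), you can run:

kubectl auth can-i create roles --namespace harness
kubectl auth can-i create rolebindings --namespace harness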

The kotsadm-operator-role Role below defines the permissions required by the kubectl operator:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  labels:
    kots.io/kotsadm: "true"
    velero.io/exclude-from-backup: "true"
  name: kotsadm-operator-role
  namespace: harness
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: harness-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update

Note that the namespace used here is harness. If you are using a different namespace, replace harness with it.

Add this role and bind it to the user account that will be installing Harness.

For example, if you add the above role to a file named kotsadm-operator-role.yaml, log into your cluster and run the following command:

kubectl apply -f kotsadm-operator-role.yaml

The output will be something like this:

role.rbac.authorization.k8s.io/kotsadm-operator-role created
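You can confirm that the Role was created with:

kubectl get role kotsadm-operator-role --namespace harness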

Next, define a role binding to bind that kotsadm-operator-role to the user that will install KOTS and Harness:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kots-user-binding
  namespace: harness
subjects:
- kind: User
  name: john.smith@mycompany.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: kotsadm-operator-role
  apiGroup: rbac.authorization.k8s.io

Add the role binding to a file, such as kotsUserRoleBinding.yaml.

To create the role binding, run the following command:

kubectl apply -f kotsUserRoleBinding.yaml

The output will be something like this:

rolebinding.rbac.authorization.k8s.io/kots-user-binding created
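Optionally, verify the binding by impersonating the user (replace the example address with your own user):

kubectl auth can-i '*' '*' --namespace harness --as john.smith@mycompany.com

This should return yes, because the kotsadm-operator-role grants all verbs on all resources in the namespace.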

Restrictions or Stipulations

Some Kubernetes implementations restrict the range of user IDs that a pod can run as.

Harness images run as the root user. There should be no restrictions on runAsUser for the harness namespace in the cluster.
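If your cluster enforces Pod Security admission or a similar policy, check that the harness namespace is not restricted in a way that blocks root containers. For example, on clusters using the built-in Pod Security labels:

kubectl get namespace harness --show-labels
# The pod-security.kubernetes.io/enforce label, if present, should not be set to "restricted".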

Install Harness On-Prem

Now that you have set up the requirements, proceed with installation in On-Prem Existing Kubernetes Cluster Setup.

