Kubernetes Connected On-Prem Setup

In addition to the Harness SaaS offering, you can run the Harness Continuous Delivery-as-a-Service platform on-premises, in a local Kubernetes cluster.

For more information, see the Harness Blog article: Harness uses Kubernetes for On-Premises Deployment.

Pre-Installation

This section describes the Kubernetes On-Prem architecture and the requirements for all its components.

The next section, Installation, describes how to set up and test each component.

Architecture

Here is the high-level architecture of the installed Harness Connected On-Prem with Kubernetes:

The main components are:

  1. Kubernetes cluster hosting the Harness microservices for On-Prem.
  2. Docker repository hosting the Harness microservice images that will be pulled by the cluster.
  3. Harness Ambassador hosted on a Virtual Machine (VM). The Ambassador does the following:
    1. Makes a secure outbound connection to the Harness Cloud to download binaries and container images.
    2. Pushes container images to your Docker repository using the Docker client.
    3. Installs images into the Kubernetes cluster using kubectl.
  4. Load Balancer that connects to the static node port of the Ingress controller in the Harness namespace (other options are discussed in Load Balancer Setup). You provide Harness with the internal URL used by the LB.
  5. Harness Cloud from where the Ambassador downloads the binaries and container images.

The steps in this document walk through setting up and testing each component, and then installing Harness On-Prem.

Let's get started by reviewing the requirements for the components.

On-Prem Requirements

This section lists the infrastructure requirements that must be in place before following the installation steps later in this document.

Kubernetes Cluster Requirements

The Kubernetes cluster must meet the following requirements:

  • Kubernetes cluster (v1.9 to 1.15.x) with a dedicated namespace: harness.
    Instructions for creating the namespace are below in Kubernetes Cluster and Namespace Setup.
  • 33.5 cores, 67.5 GB RAM, 910 GB disk storage (see table and storage volume list below).

Microservice                Pods    CPU     Memory (GB)

Manager                     2       4       10
Verification                2       2       6
Machine Learning Engine     1       8       2
UI                          2       0.5     0.5
MongoDB                     3       12      24
Proxy                       1       0.5     0.5
Ingress                     2       0.5     0.5
TimescaleDB                 3       6       24
Total                       16      33.5    67.5

  • Storage volume (200 GB) for each Mongo pod (600 GB total).
  • Storage volume (100 GB) for each TimescaleDB pod (300 GB total).
  • Storage volume (10 GB) for the Proxy pod.
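As an optional sanity check before proceeding, you can confirm the cluster version and available storage classes from any machine with cluster-admin access (standard kubectl commands; output varies by platform):

kubectl version --short
kubectl get nodes
kubectl get storageclass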
Docker Repository Server Requirements

The server that will host the Docker repository must meet the following requirements. 

The instructions for adding the requirements are described in Docker Repo Setup and Ambassador Virtual Machine Setup.
  • Docker repository — Harness requires a Docker repo for the Harness microservice images that will be pulled by the Kubernetes cluster.
  • Docker user account — A Docker repo user account will be used by the Docker client on the Ambassador VM to access the Docker repo.

    It is also used to create a Kubernetes secret. This secret is used by the Ambassador when it runs kubectl commands remotely on the Kubernetes cluster and pulls images from the repo.
  • Important: The Docker repo user account should have permissions to read, write, and create new repositories on the Docker registry.

If the user lacks the permission to create new repositories, contact Harness Support to pre-create the repositories for the installation.

Harness Ambassador VM Requirements

The Harness Ambassador VM must meet the following requirements. Please install the required software. If you are using the Ambassador Docker images from Harness, they already include all of the prerequisites.

The instructions for testing the software requirements are described in Ambassador Virtual Machine Setup.

  • System — 1 core, 6 GB RAM, 20 GB disk.
  • Connectivity to the Harness Cloud — The Ambassador VM must be able to connect to hostname app.harness.io and port 443. The following command will check connectivity to the Harness Cloud:

    nc -vz app.harness.io 443

    • Optional: Proxy file-size limit — If you have a proxy set up in front of the Ambassador to reach app.harness.io, ensure that your proxy's configuration allows downloads of files as large as 2 GB. This is required to pull the artifacts that the Ambassador will download to install and upgrade Harness microservices. On Apache, set this limit using the LimitRequestBody directive. On nginx, use the client_max_body_size directive.
  • Docker — Install Docker on the Ambassador VM. The Docker client is used to log the Ambassador into the Docker repository containing the Harness images.
    Also, follow the Docker post install steps. Exit, log back on, and verify that you can run Docker using docker ps.
  • kubectl — The kubectl CLI must be installed on the Ambassador VM. kubectl is used to execute Kubernetes commands for the remote Kubernetes cluster harness namespace.
    Once you have created the Kubernetes cluster (described in Kubernetes Cluster and Namespace Setup), you will create a kubeconfig file using a script and copy it to the Ambassador VM (as described in Kubernetes Config File Setup). The kubeconfig file configures kubectl on the Ambassador VM to connect to the harness namespace in the remote Kubernetes cluster.
  • Linux user account — A dedicated user account used to run the Harness Ambassador. The user account does not need to have root access on localhost. It should have default user permissions.
  • cURL
  • Unzip
  • sed
Running a Container-based Ambassador — If you choose to run the Ambassador as a Docker image, root access on localhost is required. If root access is not possible, please follow the instructions on how to run Docker as a non-root user: https://docs.docker.com/install/linux/linux-postinstall/.
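As a quick sketch, you can confirm that the required software is present on the Ambassador VM with a loop like the following (assumes a Bash shell; nc is included because it is used for the connectivity test above):

for cmd in docker kubectl curl unzip sed nc; do
  command -v "$cmd" >/dev/null || echo "$cmd is not installed"
done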

Networking and Access Requirements

The following connections must be enabled:

  • Load balancer to the Kubernetes cluster.
  • Kubernetes cluster to the internal Docker repository, to pull images.
  • Ambassador to the Kubernetes cluster, to run kubectl commands.
  • Ambassador to the Docker repository server, to push Harness images.
  • Ambassador to app.harness.io over port 443.
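For example, you can spot-check the outbound connections from the Ambassador VM with nc. The repository and API server hostnames and ports below are placeholders for your environment:

nc -vz <docker-repo-host> <repo-port>
nc -vz <kubernetes-api-server> <api-port>
nc -vz app.harness.io 443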
Load Balancer

The type of load balancer you use to support Harness On-Prem is your decision.

Ensure that the load balancer you configure supports the networking requirements described earlier.

The instructions for configuring the load balancer are described in Load Balancer Setup.

Installation

Once you have configured the requirements in Pre-Installation, use the instructions in this section to set up each of the Harness Kubernetes On-Prem components.

The instructions are organized so you don't have to jump back and forth between servers.

Docker Repo Setup

As described in Architecture, the Harness microservice images downloaded by the Ambassador are kept in a Docker repository in your environment.

This step describes how to set up the Docker repository for Harness On-Prem.

  1. Create a Docker repository using standard Docker setup commands as described in Docker's documentation.
Important — Record the Docker repository's internal URL. You will send this URL to Harness later, as well as use it to configure the Docker repo secret used by Harness microservices. 
  2. Next, create a Docker user for the Ambassador. Later, when you set up the Ambassador VM, you run docker login to log in as this user from the Ambassador.
    This command will cache the credentials on the Ambassador to use them for future connections.

The preceding steps are for generic Docker setup; however, you might be running Docker on a cloud platform. Please consult the cloud platform documentation for setting up Docker to meet the requirements stated above.

As an example of a cloud platform Docker setup, the next section shows the Docker setup for Google Container Registry (GCR).

Google Container Registry (GCR) Example

Here is an example of authenticating into Google Container Registry (GCR), adding its Docker credentials, and then verifying the Docker repo:

$ gcloud auth login

Response:

Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?...

You are now logged in as [jane.doe@harness.io].
Your current project is [qa-setup]. You can change this setting by running:

$ gcloud config set project PROJECT_ID

janedoe@jane-doe ~ % gcloud config set project playground-123
Updated property [core/project].


janedoe@jane-doe ~ % gcloud auth configure-docker


Adding credentials for all GCR repositories.


WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
After update, the following will be written to your Docker config file
located at [/Users/janedoe/.docker/config.json]:
{
"credHelpers": {
"gcr.io": "gcloud",
"marketplace.gcr.io": "gcloud",
"eu.gcr.io": "gcloud",
"us.gcr.io": "gcloud",
"staging-k8s.gcr.io": "gcloud",
"asia.gcr.io": "gcloud"
}
}

Do you want to continue (Y/n)? y

Docker configuration file updated.

docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
e2334dd9fee4: Pull complete
Digest: sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12
Status: Downloaded newer image for busybox:latest

docker tag busybox us.gcr.io/playground-123/jane-test/busybox:tag1
docker push us.gcr.io/playground-123/jane-test/busybox
The push refers to a repository [us.gcr.io/playground-123/jane-test/busybox]
5b0d2d635df8: Pushed
tag1: digest: sha256:de8c8ce88a54ec933a7a7d350c3a33e8cadadf2801c9129a879b4efeb93ed863 size: 527

Kubernetes Cluster and Namespace Setup

This step describes how to set up your Kubernetes platform for Harness On-Prem.

When On-Prem is installed later, the Harness On-Prem microservice images are pulled from the Docker repository you set up into the Kubernetes cluster where they will run.

In this step, we will use the Kubernetes YAML manifest file provided by Harness to create all necessary objects in Kubernetes for Harness, including:

  • A namespace named harness.
  • A Service account named harness-namespace-admin with all permissions within the harness namespace.
  • A Role called harness-namespace-admin-full-access that provides access to the required API groups (apiGroups).
  • A RoleBinding called harness-namespace-admin-view that binds the service account to the role above.

Do the following:

  1. Download the harness-resources.yaml from Harness.
  2. Log into your Kubernetes platform as a user with cluster admin credentials.
  3. Apply harness-resources.yaml with the following command:

    kubectl apply -f harness-resources.yaml

The output should look something like this:

namespace/harness created
serviceaccount/harness-namespace-admin created
role.rbac.authorization.k8s.io/harness-namespace-admin-full-access created
rolebinding.rbac.authorization.k8s.io/harness-namespace-admin-view created
clusterrole.rbac.authorization.k8s.io/harness-clusterrole created
serviceaccount/harness-serviceaccount created
clusterrolebinding.rbac.authorization.k8s.io/harness-clusterrole-hsa-binding created
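Optionally, confirm that the namespace-scoped objects were created (a standard kubectl check):

kubectl get serviceaccount,role,rolebinding -n harness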

Now that the harness namespace is created, we will create a secret in the namespace for the Ambassador to use later. The secret is stored in a kubeconfig file that you will copy to the Ambassador VM later.

Kubernetes Config File Setup

In this step, we create and test the kubeconfig file that the Ambassador will use later to access the Kubernetes namespace in the cluster you created.

Later, when you set up the Ambassador in Ambassador Virtual Machine Setup, you will copy this kubeconfig file to the Ambassador VM. The Harness Ambassador will use this file to execute kubectl commands on the remote Kubernetes cluster:

Ensure you have the jq command-line JSON processor installed before you execute this script.
  1. Download the generate_kubeconfig.sh file from Harness (right-click and save the file).
  2. Run the generate_kubeconfig.sh file you downloaded:

    bash ./generate_kubeconfig.sh

The output should look something like this example from GCP:

Creating target directory to hold files in ./kube/...done
Getting secret of service account harness-namespace-admin-harness
Secret name: harness-namespace-admin-token-<id>

Extracting ca.crt from secret...done
Getting user token from secret...done
Setting current context to: <your_cluster_name>
Cluster name: <your_cluster_name>

Endpoint: https://<IP_address>

Preparing k8s-harness-namespace-admin-harness-conf
Setting a cluster entry in kubeconfig...Cluster "<your_cluster_name>" set.
Setting token credentials entry in kubeconfig...User "harness-namespace-admin-harness-<your_cluster_name>" set.
Setting a context entry in kubeconfig...Context "harness-namespace-admin-harness-<your_cluster_name>" created.
Setting the current-context in the kubeconfig file...Switched to context "harness-namespace-admin-harness-<your_cluster_name>".

All done! Testing with:
KUBECONFIG=./kube/k8s-harness-namespace-admin-harness-conf kubectl get pods -n harness
No resources found in harness namespace.

When you execute this script it will create a folder called kube that contains a file named k8s-harness-namespace-admin-harness-conf.

Do not delete this file. You must copy this file onto the Ambassador VM at the location $HOME/.kube/config as described in Test Ambassador Connectivity and Access to Harness Namespace.

To verify the harness namespace on your Kubernetes cluster, and the kubeconfig file, run the following command. Replace <kube-config-file> with the path to the kubeconfig file created above:

kubectl get namespace harness --kubeconfig=<kube-config-file>

None of the above commands should return an error.
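If you want an additional check that the service account in the kubeconfig has the expected permissions, kubectl auth can-i works with the same file (a minimal sketch; it should print yes):

kubectl auth can-i create pods -n harness --kubeconfig=<kube-config-file>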

Docker Repo Secret Setup

In this step, you will create a secret in the harness namespace that Kubernetes will use to access the Docker repository containing the Harness microservices images.

This secret will be used by the Harness microservices installed later on in Install Harness.

  1. Obtain the Docker user account user name and password. You will add these to the script file you download and run.
  2. Ensure you are on the Kubernetes cluster you created.
  3. Download the secret.sh file from Harness (right-click and select Save) to create the secret.
  4. Open secret.sh in a code editor and replace the repository URL, user, password, and email placeholders with your information.

    Here is what the placeholders look like:

    REGISTRY_URL="<your-repo-url>"

    REGISTRY_USER="<user>"

    REGISTRY_PASS="<password>"

    REGISTRY_EMAIL="<email>"
  5. Execute the script.
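The exact contents of secret.sh are provided by Harness, but conceptually it creates a standard docker-registry secret in the harness namespace. A rough equivalent, using the placeholder values above and the secret name regcred from the GCR example below, looks like this:

kubectl create secret docker-registry regcred \
  --docker-server="$REGISTRY_URL" \
  --docker-username="$REGISTRY_USER" \
  --docker-password="$REGISTRY_PASS" \
  --docker-email="$REGISTRY_EMAIL" \
  -n harness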

The preceding steps are for a generic Kubernetes and Docker setup; however, you might be running them on a cloud platform. Please consult the cloud platform documentation for adding a secret to the Kubernetes namespace.

As an example of a cloud platform setup, the next section shows the Kubernetes commands to run for a Docker setup in Google Container Registry (GCR).

Alternative Example using GCR

If your Docker repository was created using GCR, here is the command you would use on the Kubernetes cluster to create the secret in the harness namespace:

$ kubectl create secret docker-registry regcred --docker-server=us.gcr.io --docker-username=_json_key --docker-password="<cat the path to the keyfile.json>" --docker-email=<email id>

secret/regcred created

Here's an example of running this command:

$ kubectl create secret docker-registry regcred --docker-server=us.gcr.io --docker-username=_json_key --docker-password="$(cat ~/keyfile.json)" --docker-email=jane.doe@harness.io

secret/regcred created

You can verify the secret with kubectl describe; the output should look something like this:

kubectl describe secret regcred -n harness

Name: regcred

Namespace: harness

Labels: <none>

Annotations: <none>

Type: kubernetes.io/dockerconfigjson

Data

====

.dockerconfigjson: 5686 bytes

Option: Store Certificate Secret in Harness Namespace

If you would like TLS to terminate in the Harness Kubernetes cluster, you must provide your certificate as a secret in the cluster.

To perform this option, obtain the following:

  • The certificate in PEM (.pem) format. 
  • The private key in PEM format.
  • A secret.yaml file in the following format (note the placeholders):
apiVersion: v1
data:
  tls.crt: _FULLCHAIN_
  tls.key: _PRIVKEY_
kind: Secret
metadata:
  name: harness-cert
  namespace: harness
type: kubernetes.io/tls

Do the following:

  1. Replace _FULLCHAIN_ with the certificate in base64. You can use a command like this to convert to base64:

    cat fullchain.pem | base64
  2. Replace _PRIVKEY_ with the private key in base64:

    cat privkey.pem | base64
  3. Apply the secret.yaml file to the harness namespace using the command:

    kubectl apply -f secret.yaml

This will populate the cert as a secret and the Harness load balancer should pick it up and use it. Please let Harness know if you have any questions.
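Putting steps 1 through 3 together, a minimal sketch looks like this. The -w 0 flag is GNU coreutils base64 and prevents line wrapping; on macOS, base64 does not wrap by default, so omit the flag:

FULLCHAIN=$(base64 -w 0 fullchain.pem)
PRIVKEY=$(base64 -w 0 privkey.pem)
sed -e "s|_FULLCHAIN_|$FULLCHAIN|" -e "s|_PRIVKEY_|$PRIVKEY|" secret.yaml | kubectl apply -f -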

Now that the Docker and Kubernetes components are set up, you can set up the Harness Ambassador VM that manages Harness On-Prem installations.

Ambassador Virtual Machine Setup

The Harness Cloud uses the Ambassador to manage the installation of Harness On-Prem's Docker images and their setup in your Kubernetes harness namespace.

In this step we will test the connections from the Ambassador VM to the components you have set up, and then install the Ambassador.

Harness recommends not running the Harness Ambassador as root. Please create a dedicated user for it. If you are running the Harness Ambassador in Kubernetes or Docker, then you must run it as root.

  1. Ensure the Ambassador VM meets the requirements described in Harness Ambassador VM Requirements.
  2. Ensure the Ambassador is running on a dedicated VM.

Perform the steps in the following sections.

Test Outbound Connections

The Harness Ambassador VM requires outbound network connectivity to app.harness.io over port 443.

  1. Test outbound connectivity from the Ambassador machine:

    nc -vz app.harness.io 443

The nc command output should provide no errors and look like this:

Connection to app.harness.io port 443 [tcp/https] succeeded!

Ambassador Installation

Once you have verified the Ambassador VM meets the system and networking requirements, you can install and start the Ambassador.

  1. Extract Ambassador:
    1. Log into the VM as your Harness dedicated user.
    2. Copy the Ambassador installer provided by Harness to the Ambassador VM.
    3. Extract the installer: tar -xvf harness-ambassador.tar.gz.
    4. If you need to configure proxy settings, run setup-proxy.sh.
  2. Start Ambassador:
    1. Open the extracted Ambassador folder.
    2. Run the start script: start_ambassador.sh.

Configure Docker Login for the Ambassador

The Ambassador uses a Docker user account to log into the Docker repository and run the Docker commands to add the Harness On-Prem images.

  1. Ensure you are logged into the Ambassador VM with the same account you used to install and run the Ambassador.
  2. Run the following command, replacing <repository_url> with the URL of your Docker repository:

    docker login <repository_url>

You are prompted for the Docker user account username and password. Enter these and verify the login was successful.

The preceding steps are for generic Docker setup; however, you might be running Docker on a cloud platform. Please consult the cloud platform documentation for remote login.

As an example of a cloud platform setup, the next section shows the gcloud commands to run for a Docker setup in Google Container Registry (GCR).

GCR Docker Login Example

If you have used GCR to create the Docker repository, you can use the following examples to log into the Docker repo.

First, here is an example of creating the key:

gcloud iam service-accounts keys create keyfile.json --iam-account packer-builder-username@playground.iam.gserviceaccount.com

The output should look something like this:

Response: created key [3ea7ce42d5fe26a453xssx5c9ef9e5dd40d25] of type [json] as [keyfile.json] for [packer-builder-username@playground.iam.gserviceaccount.com]

Next, we log in using the key:

cat keyfile.json | docker login -u _json_key --password-stdin https://us.gcr.io

After you run the above command, you should see the message Login Succeeded.

See Access token from GCP for Docker login info.

Verify Docker User Permissions

The Docker user configured on the Ambassador should have permissions to read, write, and create new repositories on the Docker registry. 

If the user lacks the permission to create new folders, contact Harness Support to pre-create the repositories for the installation.
  1. Check Write Permission by running these commands:

    docker pull nginx:latest (pulls nginx from the public Docker Hub repository)

    docker tag nginx:latest <your_repo>/nginx:latest

    docker push <your_repo>/nginx:latest
  2. Check Read Permission:

    docker pull <your_repo>/nginx:latest

    docker pull us.gcr.io/playground-123/k8s-on-prem/nginx:latest

The pull commands should complete without authorization errors (you will see the usual layer download output and a Status: Downloaded newer image message).

Test Ambassador Connectivity and Access to Harness Namespace

In this step, you will copy the kubeconfig file you created in Kubernetes Config File Setup to the Ambassador.

Next, you use the file to connect the Ambassador to the harness namespace in the Kubernetes cluster.

As noted in Harness Ambassador VM Requirements, the kubectl command interface should be installed on the Ambassador VM.

  1. On the machine where you created the Kubernetes harness namespace, copy the k8s-harness-namespace-admin-harness-conf file you created in Kubernetes Config File Setup.
  2. On the Ambassador, paste the file in this location: $HOME/.kube/config (a sample copy command is shown after this procedure).
  3. On the Ambassador, run the following command to ensure connectivity and access to the harness namespace in the cluster:

    kubectl get pods -n harness

The output should look something like this:

No resources found in harness namespace.

This output confirms that the login was successful and the harness namespace was located.
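For reference, a typical way to copy the kubeconfig file from the machine where you generated it to the Ambassador VM is scp. The user and hostname below are placeholders; adjust them for your environment:

ssh <user>@<ambassador-vm> "mkdir -p ~/.kube"
scp ./kube/k8s-harness-namespace-admin-harness-conf <user>@<ambassador-vm>:~/.kube/config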

Now that the Ambassador is set up, you can configure the load balancer for accepting traffic to the Kubernetes cluster.

Load Balancer Setup

You need to pick a method for sending traffic into the Harness Kubernetes On-Prem cluster that best suits your needs.

This step discusses the options available.

Cluster IP

Use this method if you have an Ingress controller in Kubernetes and a load balancer already pointing to it. This is a common method when Harness On-Prem customers have multiple applications in the same Kubernetes cluster.

You are responsible for creating an Ingress rule from your Ingress controller to the Harness Ingress controller.

The Harness DNS domain name you use (for example, mycompany.harness.io) must resolve to your Ingress controller.

Load Balancer

Use this method if you do not have a load balancer or Ingress controller set up, and the Kubernetes cluster is running in a cloud provider that supports automatic load balancer creation (AWS, GCP, Azure). This is typical for new Kubernetes installations in the cloud.

You are responsible for providing Harness with a static IP used by the load balancer. In some cases, customers need to provide annotations for the automatic load balancer creation.

You will map the Harness DNS domain name you use to the static IP of the load balancer.

Node Port

Use this method if you have your own load balancer external to Kubernetes (F5, A10, etc).

You are responsible for providing Harness with a node port to use, and you must map your load balancer service to route traffic to Harness On-Prem.

The Harness DNS domain name you use must point to the external load balancer.

TLS Considerations

Harness recommends that all access to Harness On-Prem microservices be encrypted using TLS. All of the load balancing options mentioned above support TLS termination.

Here are the two TLS options:

  • Terminate TLS inside Harness On-Prem — In this configuration, you will provide Harness On-Prem with a certificate as a secret in your Kubernetes cluster. Harness On-Prem will use this certificate to encrypt all traffic to and from Harness On-Prem.
  • Terminate TLS outside of Harness On-Prem — In this configuration, Harness On-Prem will accept unencrypted traffic. It is your responsibility to terminate TLS upstream from Harness On-Prem. Some unencrypted traffic from your load balancer to Harness is required (over port 80) in this scenario.

Avoid Common Issues

This step is a quick review and verification of certain critical configuration settings.

TLS and Harness Integration
  1. Review the TLS options discussed in TLS Considerations, and decide on an option.
  2. Test your TLS connections to ensure there are no issues with certificate verification.
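One common way to test this is openssl s_client against your Harness DNS name (the hostname below is a placeholder). The output shows the certificate chain presented by your load balancer and any verification errors:

openssl s_client -connect mycompany.harness.io:443 -servername mycompany.harness.io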
Create Docker Login for Harness Ambassador
  1. Ensure that you have created a Docker user account on the Docker repo for the Harness Ambassador as described in Configure Docker Login for the Ambassador.
  2. Ensure that you test the login credentials and verify that the Ambassador VM account can log into Docker.
Grant Docker Permissions for Harness Ambassador
  1. Ensure that the Docker repository user account has permissions to read, write, and create new repositories on the Docker registry.

If the user account lacks the permission to create new folders in the repository, contact Harness Support to pre-create the repositories for the installation.

Create Secret in Kubernetes Cluster for Accessing the Docker Repo
  1. Ensure that you have created a secret in the harness namespace to access the Docker repository that contains the Harness microservices images. This is described in Docker Repo Secret Setup.
Create a Kubeconfig File and Place it on the Ambassador
  1. Ensure that you created the kubeconfig file as described in Kubernetes Config File Setup.
  2. Ensure that you copied the file to the Ambassador VM location: $HOME/.kube/config.

If you have not done so already, run the following command to ensure connectivity and access to the harness namespace in the cluster:

kubectl get pods -n harness

The output should look something like this:

No resources found in harness namespace.

Send Information to Harness

Once the infrastructure is prepared, Harness installation involves the following steps:

Email the following details to the Harness Team:

  • Repository URL — Internal URL used by the Ambassador and Kubernetes cluster. Include http(s):// scheme.
  • Load Balancer URL — Internal URL used by the Ambassador and Kubernetes cluster.  Include http(s):// scheme.
  • StorageClass — For example, standard, gp2, etc. Select a storage class used for pods that need permanent storage.
Best Practice — To avoid data loss in the event the Harness namespace or database PVCs are deleted, use a storage class with reclaimPolicy: Retain. If you don't have a storage class with that setting, you can copy an existing storage class's YAML, set the reclaimPolicy, and give it a new name (a sketch of this is shown after this list). This is the name you will provide to Harness.
  • Ingress controller type and port — LoadBalancer, NodePort, or ClusterIP, and port number.
  • SSL Termination endpoint — Please confirm SSL is terminated on your load balancer.
  • Internal Ambassador VM Hostname — Output of the command hostname. Used by Harness to identify the Ambassador in Harness records.
  • Internal Ambassador VM IP Address — Output of the command hostname -i. Used by Harness to identify the Ambassador in their records.
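For the StorageClass best practice above, a rough sketch of cloning an existing storage class and setting reclaimPolicy: Retain might look like this. It assumes a storage class named standard exists; edit the exported YAML to change metadata.name (for example, to harness-retain), set reclaimPolicy: Retain, and remove server-generated fields such as uid, resourceVersion, and creationTimestamp before applying:

kubectl get storageclass standard -o yaml > harness-retain-sc.yaml
kubectl apply -f harness-retain-sc.yaml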

Install Harness

Once you provide the above information, Harness Support will perform the installation. This process includes: 

  1. The Harness Team validates the connectivity of the Ambassador to the Harness Cloud.
  2. The Harness Team runs pre-install checks.
  3. The Harness Team triggers the installation of the On-Prem platform on your Kubernetes cluster, as configured.
  4. The Harness Team runs a post-install Workflow to verify that Harness On-Prem is working as expected.
  5. The Harness Team notifies you that the installation is complete.

Post-Installation

Once Harness Support has verified the installation, do the following:

  1. Log into the Harness Manager using the Admin setup link. Replace LOAD_BALANCER_URL with your URL:

    https://LOAD_BALANCER_URL/#/onprem-signup
  2. In the resulting signup form, set up the initial admin account by entering the requested details.

    All subsequent logins will go to the standard URL: https://LOAD_BALANCER_URL
  3. Log into the account using the load-balancer URL: https://LOAD_BALANCER_URL

The Harness On-Prem application should open successfully.

Important Next Steps

Important: You cannot invite other users to Harness until a Harness Delegate is installed and a Harness SMTP Collaboration Provider is configured.
  1. Install the Harness Delegate: Delegate Installation and Management.
  2. Set up an SMTP Collaboration Provider in Harness for email notifications from the Harness Manager: Add SMTP Collaboration Provider.
    Ensure you open the correct port for your SMTP provider, such as Office 365.
  3. Add a Harness Secrets Manager. By default, On-Prem installations use the local Harness MongoDB for the default Harness Secrets Manager. This is not recommended.

    After On-Prem installation, configure a new Secret Manager (Vault, AWS, etc). You will need to open your network for the Secret Manager connection.

Notes

Harness On-Prem installations do not currently support the Harness Helm Delegate.
