Add Cloud Providers

You add cloud providers to your Harness Account and then reference them when defining deployment environments.

To Add a Cloud Provider

To add a cloud provider to your Harness account, do the following:

  1. Click Setup, and then click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. In Type, select the type of cloud provider you want to add. The dialog settings will change for each cloud provider.
  4. In Display Name, enter the name that you will use to refer to this Cloud Provider when setting up your Harness Application, such as GCP Cloud. You will use the Display Name when setting up Harness Environments, Service Infrastructures, and other settings.
  5. Enter the cloud provider account information, and then click SUBMIT. Once you have created Harness applications and environments, you can return to this dialog and add a Usage Scope to control which applications and environments may use this cloud provider account.

Below you will find account details and Harness permission requirements for the different providers.

Please ensure that the account used to add the cloud provider to Harness has the permission requirements listed below.

Kubernetes Cluster

When the Kubernetes cluster is created, you specify the authentication methods for the cluster. You use these methods to enable Harness to connect to the cluster as a cloud provider for the deployment service infrastructure.

For more information, see Authenticating from Kubernetes.

For more information about Kubernetes and Harness, see Kubernetes and Harness FAQ.

To add a Kubernetes Cluster as a Harness Cloud Provider, do the following:

  1. Click Setup, and then click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. When you choose Kubernetes Cluster in Type, the Cloud Provider dialog changes to show the Kubernetes Cluster settings.

How you fill out the dialog depends on the authentication strategy used by your Kubernetes cluster.

The Kubernetes Cluster dialog supports four authentication strategies. Fill out the dialog using only those fields that apply to your authentication strategy:

  1. Inherit Cluster Details from selected Delegate. Use this option if you installed the Harness delegate in your cluster. This is the common method for Amazon EKS.
  2. Username and password.
  3. CA certificate, client certificate, and client key. Key passphrase and key algorithm are optional.
  4. Kubernetes service account token.

The Kubernetes Cluster dialog has the following fields.

Inherit Cluster Details from selected Delegate

Select this option if the Kubernetes cluster is the same cluster where the Harness delegate was installed. This is the common method for Amazon EKS. If you use this option, the service account you use must have the Kubernetes cluster-admin role.

Select the tag of the Delegate. For information on adding tags to Delegates, see Delegate Installation.

In the case of some Kubernetes providers, such as OpenShift, the delegate should be installed outside of the cluster.

Master URL

The Kubernetes master node URL.

The easiest method to obtain the master URL is using kubectl:

kubectl cluster-info
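
The output lists the cluster's endpoints, and the Kubernetes master line contains the value to paste into Master URL. The output will look something like the following (the address shown is a placeholder):

Kubernetes master is running at https://203.0.113.10
KubeDNS is running at https://203.0.113.10/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy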

User Name / Password

Username and password for the Kubernetes cluster. For example, admin or john@example.com, and a Basic authentication password.

CA Certificate

Paste in the Certificate authority root certificate used to validate client certificates presented to the API server. For more information, see Authenticating from Kubernetes.

Client Certificate

Paste in the client certificate for the cluster. The client certificate may be used in conjunction with or in place of Basic authentication. The public client certificate is generated along with the private client key used to authenticate.

The certificate can be pasted in either Base64-encoded or decoded form.

Client Key

Paste in the client key for the client certificate.

The key can be pasted in either Base64-encoded or decoded form.

Client Key Passphrase

Paste in the client key passphrase.

The passphrase can be pasted in either Base64-encoded or decoded form.

Client Key Algorithm

Specify the encryption algorithm used when the certificate was created. Typically, RSA.

Kubernetes Service Account Token

Paste in the token for the service account. The token must be pasted in decoded (plain-text) form.

The following shell script is a quick method for obtaining the service account token. Run this script wherever you run kubectl to access the cluster.

Set the SERVICE_ACCOUNT_NAME and NAMESPACE values to the values in your infrastructure.

SERVICE_ACCOUNT_NAME=default
NAMESPACE=mynamespace
# Look up the secret that holds the service account token (requires jq)
SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.secrets[].name')
# Decode the token; on Linux, use base64 --decode in place of the macOS -D flag
TOKEN=$(kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.data["token"]' | base64 -D)
echo $TOKEN
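
As an optional sanity check before pasting the token into the dialog, you can call the cluster directly with it. This is a minimal sketch; MASTER_URL is a placeholder for the URL returned by kubectl cluster-info.

# Optional: confirm the token authenticates against the cluster (MASTER_URL is a placeholder)
kubectl --server="$MASTER_URL" --token="$TOKEN" --insecure-skip-tls-verify=true get pods --namespace "${NAMESPACE}"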

Skip Validation

Enable this option only while creating the cloud provider. When you create a service infrastructure as part of your Harness setup, Harness must be able to validate the credentials.

Until Harness has a specific namespace for the Kubernetes cluster, it tries to validate the credentials in the default namespace.

If you have a cluster without a default namespace, or the credentials entered in this dialog do not have permission in the default namespace, you can enable Skip Validation initially.

Credential Validation

When you click SUBMIT, Harness uses the provided credentials to list controllers in the default namespace in order to validate the credentials. If validation fails, Harness does not save the cloud provider and the SUBMIT fails.

If your cluster does not have a default namespace, or your credentials do not have permission in the default namespace, then you can check Skip Validation to skip this check when saving your cloud provider settings. You do not need to come back and uncheck Skip Validation.

Later, when you create a service infrastructure using this cloud provider, you will also specify a specific namespace. Harness uses this namespace rather than the default namespace.

When Harness saves the service infrastructure, it performs validation even if Skip Validation was checked.

Permissions Required

Harness requires a Kubernetes service account with permission to create entities in the target namespace.

The set of permissions should include list, get, create, and delete permissions for each of the entity types Harness uses. In general, cluster admin permission or namespace admin permission is sufficient.

When you use the Inherit Cluster Details from selected Delegate option (an in-cluster Delegate) or the Kubernetes Service Account Token setting, Kubernetes RBAC applies. The service account you use must have the Kubernetes cluster-admin role. For more information, see User-Facing Roles from Kubernetes.
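
If your service account does not yet have the role, you can bind it with kubectl. The following is a minimal sketch that assumes a service account named harness-sa in the default namespace; substitute your own names.

# Bind the cluster-admin role to the service account (names are placeholders)
kubectl create clusterrolebinding harness-admin-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=default:harness-sa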

For more information about Kubernetes and Harness, see Kubernetes and Harness FAQ.

Amazon EKS Support

Amazon EKS is supported using the Inherit Cluster Details from selected Delegate option in the Kubernetes Cluster cloud provider settings.

The process is as follows:

  1. Install a Harness Kubernetes delegate in your EKS cluster. Give it a name that you can recognize as an EKS cluster delegate. For information on installing a Kubernetes delegate, see Delegate Installation.
  2. Add a tag to the Delegate in Harness. To learn more about Delegate tags, see Delegate Installation.
  3. Add EKS as a Harness Cloud Provider using the Kubernetes Cluster type. In the Cloud Provider dialog, do the following:
  4. In Type, click Kubernetes Cluster.
  5. Click the Inherit Cluster Details from selected Delegate checkbox to enable it.
  6. In Delegate Name, select the Kubernetes Delegate you installed in the EKS cluster. You are selecting the Delegate tag name.
  7. Click SUBMIT.

When setting up the EKS cluster as a service infrastructure in a Harness environment, you simply select the EKS cloud provider you added to Harness in Cloud Provider.

Using the EKS-based environment in a workflow is no different than using any Kubernetes cluster. You simply select the Environment as part of setting up the workflow.

OpenShift Support

OpenShift is supported through an external delegate installation (shell script installation of the delegate outside of the Kubernetes cluster) and a service account token, entered in the Kubernetes Service Account Token field. You only need to use the Master URL and Kubernetes Service Account Token fields in the Kubernetes Cloud Provider dialog.

The following shell script is a quick method for obtaining the service account token. Run this script wherever you run kubectl to access the cluster.

Set the SERVICE_ACCOUNT_NAME and NAMESPACE values to the values in your infrastructure.

SERVICE_ACCOUNT_NAME=default
NAMESPACE=mynamespace
# Look up the secret that holds the service account token (requires jq)
SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.secrets[].name')
# Decode the token; on Linux, use base64 --decode in place of the macOS -D flag
TOKEN=$(kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.data["token"]' | base64 -D)
echo $TOKEN

Once configured, OpenShift is used by Harness as a typical Kubernetes cluster.

OpenShift Notes
  • The token does not need to have global read permissions. The token can be scoped to the namespace, as shown in the sketch after this list.
  • The Kubernetes containers must be OpenShift-compatible containers. If you are already using OpenShift, then this is already configured. But be aware that OpenShift cannot simply deploy any Kubernetes container. You can get OpenShift images from the following public repos: https://hub.docker.com/u/openshift and https://access.redhat.com/containers.
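
The following is a minimal sketch of creating a namespace-scoped service account with the oc CLI; the account and namespace names are placeholders.

# Create a service account and grant it the admin role in a single namespace only
oc create sa harness-sa -n mynamespace
oc policy add-role-to-user admin -z harness-sa -n mynamespace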

For information on installing the delegate, see Delegate Installation.

Google Cloud Platform (GCP)

To add GCP as a Harness Cloud Provider, do the following:

  1. Click Setup, and then click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. When you choose Google Cloud Platform in Type, the Cloud Provider dialog changes to show the GCP settings.

To obtain the Google Cloud Account Service Key File, do the following in GCP:

  1. Open the IAM & Admin page in the GCP Console.
  2. Select your project and click OPEN.
  3. In the left nav, click Service accounts.
  4. Look for the service account for which you wish to create a key, and click the service account name.
  5. In Service account details, click EDIT.
  6. Click CREATE KEY.
  7. Select a Key type and click Create. For more information, see Creating and Managing Service Account Keys from GCP.
  8. Upload the key file into the Harness Cloud Provider dialog.
  9. The Google Cloud Account Name field is automatically populated with the account name, but you can enter a new name.
  10. Click SUBMIT. The GCP cloud provider is added.
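
If you prefer the command line, the gcloud CLI can generate the same key file. This is an alternative sketch; the service account email and file name are placeholders.

# Create a JSON key file for the service account (names are placeholders)
gcloud iam service-accounts keys create harness-gcp-key.json \
  --iam-account=my-sa@my-project.iam.gserviceaccount.com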

GCP Permissions Required

The GCP service account requires the Kubernetes Engine Admin (GKE Admin) role in order to obtain the Kubernetes master username and password. Harness also requires Storage Object Viewer permissions.

For steps to add roles to your service account, see Granting Roles to Service Accounts from Google.

For more information, see Understanding Roles from GCP.
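
For reference, the following gcloud sketch grants both roles to the service account; the project ID and service account email are placeholders, and roles/container.admin corresponds to Kubernetes Engine Admin.

# Grant the Kubernetes Engine Admin role (placeholder project and account)
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:my-sa@my-project.iam.gserviceaccount.com \
  --role=roles/container.admin
# Grant the Storage Object Viewer role
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:my-sa@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer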

Amazon Web Services (AWS) Cloud

AWS is used as a Harness Cloud Provider for obtaining artifacts, deploying services, and for verifying deployments using CloudWatch.

To add AWS as a Harness Cloud Provider, do the following:

  1. Click Setup, and then click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. When you select Amazon Web Services in Type, the Cloud Provider dialog changes to show the AWS settings.

Enter your Access Key and your Secret Key. For more information, see Access Keys (Access Key ID and Secret Access Key) from AWS.

Choose a name for this provider. The name is used only to differentiate AWS providers in Harness; it is not the actual AWS account name.

AWS Permissions

User: Harness requires that the IAM user be able to make API requests to AWS. For more information, see Creating an IAM User in Your AWS Account from AWS.

User Access Type: Programmatic access. This enables an access key ID and secret access key for the AWS API, CLI, SDK, and other development tools.
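
If the IAM user already exists, you can generate the key pair with the AWS CLI; the user name is a placeholder. The command prints the access key ID and secret access key to enter in the dialog.

# Create an access key ID and secret access key for the IAM user (name is a placeholder)
aws iam create-access-key --user-name harness-user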

ECR

Policy Name: AmazonEC2ContainerRegistryReadOnly.

Policy ARN: arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly.

Description: Provides read-only access to Amazon EC2 Container Registry repositories.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

Amazon S3

Policy Name: AmazonS3ReadOnlyAccess.

Policy ARN: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess.

Description: Provides read-only access to all buckets via the AWS Management Console.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}

ECS (Existing Cluster)

Two policies are required: the AWS Managed Policy AmazonEC2ContainerServiceFullAccess, and a Customer Managed Policy you create for Application Auto Scaling.

Policy Name: AmazonEC2ContainerServiceFullAccess.

Policy ARN: arn:aws:iam::aws:policy/AmazonEC2ContainerServiceFullAccess.

Description: Provides administrative access to Amazon ECS resources.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:Describe*",
        "autoscaling:UpdateAutoScalingGroup",
        "cloudformation:CreateStack",
        "cloudformation:DeleteStack",
        "cloudformation:DescribeStack*",
        "cloudformation:UpdateStack",
        "cloudwatch:GetMetricStatistics",
        "ec2:Describe*",
        "elasticloadbalancing:*",
        "ecs:*",
        "events:DescribeRule",
        "events:DeleteRule",
        "events:ListRuleNamesByTarget",
        "events:ListTargetsByRule",
        "events:PutRule",
        "events:PutTargets",
        "events:RemoveTargets",
        "iam:ListInstanceProfiles",
        "iam:ListRoles",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}

Policy Name: This is a customer-managed policy you must create. In this example we have named it ApplicationAutoScaling. For more information, see AWS Auto Scaling from Amazon.

Policy ARN: arn:aws:iam::37745738389:policy/ApplicationAutoScaling.

Description: Full Application Auto Scaling access.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "application-autoscaling:*",
      "Resource": "*"
    }
  ]
}
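
To create this customer-managed policy from the command line, you can pass the JSON above to the AWS CLI. This is a sketch; the policy and file names are placeholders.

# Save the JSON above as application-autoscaling.json, then create the policy
aws iam create-policy --policy-name ApplicationAutoScaling \
  --policy-document file://application-autoscaling.json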

Notes

  • Due to an AWS limitation, Harness cannot restrict the ECS actions CreateService, UpdateService, and DeleteService to a specific cluster or resource. This limitation is why we require Resource *.
  • ECS with Public Docker Registry: Both ECS permissions are required.
  • ECS with Private Docker Registry: Both ECS permissions are required. Also, the Docker agent on the container host must be configured to authenticate with the private registry. For more information, see Private Registry Authentication from AWS.
  • ECS with ECR: The permissions from both the ECS and ECR sections are required.
  • ECS with GCR: This is currently not supported.

AWS CodeDeploy

There are two policies required: AWSCodeDeployRole and AWSCodeDeployDeployerAccess.

Policy Name: AWSCodeDeployRole.

Policy ARN: arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole.

Description: Provides CodeDeploy service access to expand tags and interact with Auto Scaling on your behalf.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:CompleteLifecycleAction",
        "autoscaling:DeleteLifecycleHook",
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLifecycleHooks",
        "autoscaling:PutLifecycleHook",
        "autoscaling:RecordLifecycleActionHeartbeat",
        "autoscaling:CreateAutoScalingGroup",
        "autoscaling:UpdateAutoScalingGroup",
        "autoscaling:EnableMetricsCollection",
        "autoscaling:DescribePolicies",
        "autoscaling:DescribeScheduledActions",
        "autoscaling:DescribeNotificationConfigurations",
        "autoscaling:SuspendProcesses",
        "autoscaling:ResumeProcesses",
        "autoscaling:AttachLoadBalancers",
        "autoscaling:PutScalingPolicy",
        "autoscaling:PutScheduledUpdateGroupAction",
        "autoscaling:PutNotificationConfiguration",
        "autoscaling:DescribeScalingActivities",
        "autoscaling:DeleteAutoScalingGroup",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceStatus",
        "ec2:TerminateInstances",
        "tag:GetTags",
        "tag:GetResources",
        "sns:Publish",
        "cloudwatch:DescribeAlarms",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeInstanceHealth",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:DeregisterTargets"
      ],
      "Resource": "*"
    }
  ]
}

Policy Name: AWSCodeDeployDeployerAccess.

Policy ARN: arn:aws:iam::aws:policy/AWSCodeDeployDeployerAccess.

Description: Provides access to register and deploy a revision.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "codedeploy:Batch*",
        "codedeploy:CreateDeployment",
        "codedeploy:Get*",
        "codedeploy:List*",
        "codedeploy:RegisterApplicationRevision"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

ELB, ALB, and ECS

The following policy is for Elastic Load Balancer, Application Load Balancer, and Elastic Container Service.

Policy Name: AmazonEC2ContainerServiceRole.

Policy ARN: arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole.

Description: Default policy for Amazon ECS service role.

Notes: Make sure the Trust Relationship is properly set for the role. See Trusted entities below, and Amazon ECS Service Scheduler IAM Role from AWS.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "ec2:*",
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "elasticloadbalancing:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "cloudwatch:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "autoscaling:*",
      "Resource": "*"
    }
  ]
}

Trusted entities

Newly created roles under Amazon EC2 have trusted entities listed as ec2.amazonaws.com. For ECS, this needs to be updated to ecs.amazonaws.com. See the AWS documentation at Amazon ECS Service Scheduler IAM Role.
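
As a sketch, the following updates a role's trust relationship from the command line; the role name is a placeholder, and the JSON is the standard ECS service trust policy.

# Write the ECS trust policy and attach it to the role (role name is a placeholder)
cat > ecs-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam update-assume-role-policy --role-name ecsServiceRole \
  --policy-document file://ecs-trust.json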

Microsoft Azure

When you choose Microsoft Azure in Type, the Cloud Provider dialog changes to show the Azure settings.

You can find the information you need on the App registration Settings page in the Azure portal.

The Microsoft Azure dialog has the following fields.

Client ID

This is the Client/Application ID for the Azure app registration you are using. It is found in the Azure Active Directory App registrations. For more information, see Quickstart: Register an app with the Azure Active Directory v1.0 endpoint from Microsoft.

To access resources in your Azure subscription, you must assign the Azure App registration using this Client ID to a role in that subscription. Later, when you set up an Azure service infrastructure in a Harness environment, you will select a subscription. If the Azure App registration using this Client ID is not assigned a role in a subscription, no subscriptions will be available.

For more information, see Assign the application to a role from Microsoft.

Tenant ID

The Tenant ID is the ID of the Azure Active Directory (AAD) in which you created your application. This is also called the Directory ID. For more information, see Get tenant ID from Azure.

Key

Authentication key for your application. For more information, see Get application ID and authentication key from Azure.
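
If you do not yet have an App registration, the Azure CLI can create a service principal and print all three values at once. This is a minimal sketch; the name and role are placeholders. In the output, appId is the Client ID, password is the Key, and tenant is the Tenant ID.

# Create a service principal (name and role are placeholders)
az ad sp create-for-rbac --name harness-sp --role Contributor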

Pivotal Cloud Foundry (PCF)

When you choose PCF in Type, the Cloud Provider dialog changes to show the PCF settings.

The PCF dialog has the following fields.

Endpoint URL

Enter the API endpoint URL without the URL scheme. For example, enter api.run.pivotal.io, not https://api.run.pivotal.io.

For more information, see Identifying the API Endpoint for your PAS Instance from Pivotal.

Username / Password

Username and password for the PCF account to use for this connection.

Pivotal Cloud Foundry Name

Provide a unique name for this connection. You will use this name to select this connection when creating a service infrastructure for your deployment.

Usage Scope

If you want to restrict the use of a provider to specific applications and environments, do the following:

  1. In Usage Scope, click the drop-down under Applications, and click the name of the application.
  2. In Environments, click the name of the environment.

PCF Permissions

Harness requires a PCF user account with the Admin, Org Manager, or Space Manager role. The user account must be able to update spaces, orgs, and applications.

For more information, see Orgs, Spaces, Roles, and Permissions from Pivotal.
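
As a sketch with the cf CLI, the following verifies the endpoint and grants the Org Manager role; the endpoint, user, and org values are placeholders.

# Point the CLI at the API endpoint, then grant Org Manager to the connection user
cf api https://api.run.pivotal.io
cf set-org-role harness@example.com my-org OrgManager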

