Add Cloud Providers

Cloud Providers represent the infrastructure of your applications. Typically, a Cloud Provider is mapped to a Kubernetes cluster, AWS account, Google service account, Azure subscription, or a data center.

You add cloud providers to your Harness Account and then reference them when defining deployment environments.

To Add a Cloud Provider

To add a cloud provider to your Harness account, do the following:

  1. Click Setup, and then click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. In Type, select the type of cloud provider you want to add. The dialog settings change for each cloud provider.
  4. In Display Name, enter the name that you will use to refer to this Cloud Provider when setting up your Harness Application, such as GCP Cloud. You will use the Display Name when setting up Harness Environments, Service Infrastructures, and other settings.
  5. Enter the cloud provider account information, and then click SUBMIT. Once you have created Harness Applications and Environments, you can return to this dialog and add a Usage Scope to control which Applications and Environments may use this Cloud Provider account.

Below you will find account details and Harness permission requirements for the different providers.

Please ensure that the account used to add the Cloud Provider to Harness has the permission requirements listed below.

Kubernetes Cluster

When the Kubernetes cluster is created, you specify the authentication methods for the cluster. You can use these methods to enable Harness to connect to the cluster as a Cloud Provider for the deployment service infrastructure. For more information, see Authenticating from Kubernetes.

Recommended: Install and run the Harness Kubernetes Delegate in the GKE Kubernetes cluster, and then use the Kubernetes Cluster Cloud Provider to connect to that cluster using the Harness Kubernetes Delegate you installed. This is the easiest method to connect to a Kubernetes cluster. For more information, see Installation Example: Google Cloud Platform.

To add a Kubernetes Cluster as a Harness Cloud Provider, do the following:

  1. Click Setup, and then click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. When you choose Kubernetes Cluster in Type, the cloud provider dialog changes for the Kubernetes Cluster settings.

Fill in the dialog according to the authentication strategy your Kubernetes cluster uses.

What does the Test button do? The Test button tests the credentials to ensure that the Harness Delegate can authenticate with the Kubernetes cluster. The Harness Delegate performs all Harness operations in the cluster.

The Kubernetes Cluster dialog supports four authentication strategies. Fill out the dialog using only those fields that apply to your authentication strategy:

  1. Inherit Cluster Details from selected Delegate. Use this option if you installed the Harness delegate in your cluster. This is the common method for Amazon EKS.
  2. Username and password.
  3. CA certificate, client certificate, and client key. Key passphrase and key algorithm are optional.
  4. Kubernetes service account token.

The Kubernetes Cluster dialog has the following fields.

Field

Description

Inherit Cluster Details from selected Delegate

Recommended. Select this option if the Kubernetes cluster is the same cluster where the Harness Delegate was installed. This is the common method for Amazon EKS. If you use this option, the service account you use must have the Kubernetes cluster-admin role. Select the Tag of the Delegate. For information on adding Tags to Delegates, see Delegate Installation. For some Kubernetes providers, such as OpenShift, the Delegate should be installed outside of the cluster.

Master URL

The Kubernetes master node URL. The easiest method to obtain the master URL is using kubectl:

kubectl cluster-info
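If kubectl is configured for the cluster, you can also print just the API server address from your kubeconfig, which is the value to paste into Master URL (a quick sketch; assumes an active kube context):

```
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```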

User Name / Password

Username and password for the Kubernetes cluster. For example, admin or john@example.com, and a Basic authentication password.

CA Certificate

Paste in the Certificate authority root certificate used to validate client certificates presented to the API server. For more information, see Authenticating from Kubernetes.

Client Certificate

Paste in the client certificate for the cluster. The client certificate may be used in conjunction with or in place of Basic authentication. The public client certificate is generated along with the private client key used to authenticate. The certificate can be pasted in either Base64 encoded or decoded.

Client Key

Paste in the client key for the client certificate. The key can be pasted in either Base64 encoded or decoded.

Client Key Passphrase

Paste in the client key passphrase. The passphrase can be pasted in either Base64 encoded or decoded.

Client Key Algorithm

Specify the encryption algorithm used when the certificate was created. Typically, RSA.

Kubernetes Service Account Token

Paste in the service account token for the service account. The token must be pasted in decoded. The following shell script is a quick method for obtaining the service account token. Run this script wherever you run kubectl to access the cluster.

Set the SERVICE_ACCOUNT_NAME and NAMESPACE values to the values in your infrastructure.

SERVICE_ACCOUNT_NAME=default
NAMESPACE=mynamespace
SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.secrets[].name')
TOKEN=$(kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.data["token"]' | base64 --decode)
echo "$TOKEN"

Skip Validation

Enable this option during creation of this Cloud Provider only. When you create a service infrastructure as part of your Harness setup, Harness will need to validate the credentials. Until Harness has a specific namespace for the Kubernetes cluster, it tries to validate in the default namespace. If you have a cluster without a default namespace, or the credentials entered in this dialog do not have permission in the default namespace, you can enable Skip Validation initially.

Credential Validation

When you click SUBMIT, Harness uses the provided credentials to list controllers in the default namespace in order to validate the credentials. If validation fails, Harness does not save the cloud provider and the SUBMIT fails.

If your cluster does not have a default namespace, or your credentials do not have permission in the default namespace, then you can check Skip Validation to skip this check and save your Cloud Provider settings. You do not need to come back and uncheck Skip Validation.

Later, when you create a service infrastructure using this Cloud Provider, you will also specify a specific namespace. Harness uses this namespace rather than the default namespace.

When Harness saves the service infrastructure it performs validation even if Skip Validation was checked.

Permissions Required

A Kubernetes service account with permission to create entities in the target namespace.

The set of permissions should include list, get, create, and delete permissions for each of the entity types Harness uses. In general, cluster admin permission or namespace admin permission is sufficient.

When you use the Inherit Cluster Details from selected Delegate option (an in-cluster Delegate) or the Kubernetes Service Account Token setting, Kubernetes RBAC applies. The service account you use must have the Kubernetes cluster-admin role. For more information, see User-Facing Roles from Kubernetes.
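For example, if you authenticate with a service account token, you can grant the service account the cluster-admin role with a cluster role binding such as the following (the binding, namespace, and service account names are illustrative; assumes kubectl access to the cluster):

```
kubectl create clusterrolebinding harness-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:harness-service-account
```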

For more information about Kubernetes and Harness, see Kubernetes and Harness FAQ.

Amazon AWS EKS Support

AWS EKS is supported using the Inherit Cluster Details from selected Delegate option in the Kubernetes Cluster Cloud Provider settings.

The process is as follows:

  1. Install a Harness Kubernetes delegate in your EKS cluster. Give it a name that you can recognize as an EKS cluster delegate. For information on installing a Kubernetes delegate, see Delegate Installation.
  2. Add a tag to the Delegate in Harness. To learn more about Delegate tags, see Delegate Installation.
  3. Add EKS as a Harness Cloud Provider using the Kubernetes Cluster type. In the Cloud Provider dialog, do the following:
  4. In Type, click Kubernetes Cluster.
  5. Click the Inherit Cluster Details from selected Delegate checkbox to enable it.
  6. In Delegate Name, select the Kubernetes Delegate you installed in the EKS cluster. You are selecting the Delegate tag name.
  7. Click SUBMIT.

When setting up the EKS cluster as a service infrastructure in a Harness environment, you simply select the EKS Cloud Provider you added to Harness in Cloud Provider.

Using the EKS-based environment in a workflow is no different than using any Kubernetes cluster. You simply select the Environment as part of setting up the workflow.

OpenShift Support

This section describes how to support OpenShift using a Delegate running externally to the Kubernetes cluster. Harness does support running Delegates internally for OpenShift 3.11 or greater, but the cluster must be configured to allow images to run as root inside the container in order to write to the filesystem.

Typically, OpenShift is supported through an external Delegate installation (shell script installation of the Delegate outside of the Kubernetes cluster) and a service account token, entered in the Kubernetes Service Account Token field. You only need to use the Master URL and Kubernetes Service Account Token fields in the Kubernetes Cloud Provider dialog.

The following shell script is a quick method for obtaining the service account token. Run this script wherever you run kubectl to access the cluster.

Set the SERVICE_ACCOUNT_NAME and NAMESPACE values to the values in your infrastructure.

SERVICE_ACCOUNT_NAME=default
NAMESPACE=mynamespace
SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.secrets[].name')
TOKEN=$(kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.data["token"]' | base64 --decode)
echo "$TOKEN"

Once configured, OpenShift is used by Harness as a typical Kubernetes cluster.

OpenShift Notes

For information on installing the delegate, see Delegate Installation.

Google Cloud Platform (GCP)

Recommended: Install and run the Harness Kubernetes Delegate in the GKE Kubernetes cluster, and then use the Kubernetes Cluster Cloud Provider to connect to that cluster using the Harness Kubernetes Delegate you installed. This is the easiest method to connect to a Kubernetes cluster. For more information, see Installation Example: Google Cloud Platform.

To add GCP as a Harness Cloud Provider, do the following:

  1. Click Setup, and then click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. When you choose Google Cloud Platform in Type, the Cloud Provider dialog changes for the GCP settings.

To obtain the Google Cloud's Account Service Key File, in GCP, do the following:

  1. Open the IAM & Admin page in the GCP Console.
  2. Select your project and click OPEN.
  3. In the left nav, click Service accounts.
  4. Look for the service account for which you wish to create a key, and click the service account name.
  5. In Service account details, click EDIT.
  6. Click CREATE KEY.
  7. Select a Key type and click Create. For more information, see Creating and Managing Service Account Keys from GCP.
  8. Upload the key file into the Harness Cloud Provider dialog.
  9. The Google Cloud Account Name field is automatically populated with the account name, but you can enter a new name.
  10. Click SUBMIT. The GCP cloud provider is added.
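As an alternative to the console steps above, the key file can be created with the gcloud CLI (a sketch; the service account email and project name are illustrative, and the command assumes gcloud is installed and authenticated):

```
gcloud iam service-accounts keys create key.json \
    --iam-account=harness-deploy@my-project.iam.gserviceaccount.com
```

The resulting key.json is the file you upload in the Harness Cloud Provider dialog.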

GCP Permissions Required

  • The GCP service account requires the Kubernetes Engine Admin (GKE Admin) role to get the Kubernetes master username and password. Harness also requires the Storage Object Viewer permission.
  • When you attempt to connect to the Kubernetes cluster via GCP, the Kubernetes cluster must have Basic authentication enabled or the connection will fail. For more information, see Control plane security from GCP. From GCP:
You can handle cluster authentication in Google Kubernetes Engine by using Cloud IAM as the identity provider. However, legacy username-and-password-based authentication is enabled by default in Google Kubernetes Engine. For enhanced authentication security, you should ensure that you have disabled Basic Authentication by setting an empty username and password for the MasterAuth configuration. In the same configuration, you can also disable the client certificate which ensures that you have one less key to think about when locking down access to your cluster.
  • If Basic authentication is inadequate for your security requirements, use the Kubernetes Cluster Cloud Provider.
  • While Harness recommends that you use the Kubernetes Cluster Cloud Provider for Kubernetes cluster deployments, to use a Kubernetes cluster on Google GKE via the GCP Cloud Provider, Harness requires Basic authentication and/or a client certificate to be enabled on the cluster. This is required because some API classes, such as the MasterAuth class, require HTTP basic authentication or client certificates.

For steps to add roles to your service account, see Granting Roles to Service Accounts from Google. For more information, see Understanding Roles from GCP.

Another option is to use a service account that has only the Storage Object Viewer permission needed to query GCR, and then use either an in-cluster Kubernetes Delegate or a direct Kubernetes Cluster Cloud Provider with the Kubernetes service account token for performing deployment.

Google GCS and GCR

For Google Cloud Storage (GCS) and Google Container Registry (GCR), the following roles are required:

  • Viewer (roles/viewer)
  • Storage Object Admin (roles/storage.objectAdmin)

Stackdriver

Most APM and logging tools are added to Harness as Verification Providers. For Stackdriver, you use the Google Cloud Platform Cloud Provider.

Roles and Permissions
  • Stackdriver Logs - The minimum role requirement is logging.viewer.
  • Stackdriver Metrics - The minimum role requirements are compute.networkViewer and monitoring.viewer.
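The roles above can be granted with the gcloud CLI, for example (a sketch; the project and service account names are illustrative, and the command assumes gcloud is installed and authenticated):

```
gcloud projects add-iam-policy-binding my-project \
    --member=serviceAccount:harness@my-project.iam.gserviceaccount.com \
    --role=roles/logging.viewer
```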

See Access control from Google.

Amazon Web Services (AWS) Cloud

AWS is used as a Harness Cloud Provider for obtaining artifacts, deploying services, and for verifying deployments using CloudWatch:

Recommended: Install and run a Harness Delegate (ECS Delegate in an ECS cluster, Shell Script Delegate on an EC2 instance, etc) in the same VPC as the AWS resources you will use, and then use the Delegate for the AWS Cloud Provider credentials. This is the easiest method to connect to AWS. For more information, see Delegate Installation and Management.

To add AWS as a Harness Cloud Provider, do the following:

  1. Click Setup, and then click Cloud Providers.
  2. Click Add Cloud Provider. The Cloud Provider dialog appears.
  3. When you select Amazon Web Services in Type, the Cloud Provider dialog changes for the AWS settings.
  4. Choose a name for this provider. The name is used to differentiate AWS Cloud Providers in Harness. It is not the actual AWS account name.
  5. Select Assume the IAM Role of the Delegate (recommended), or Enter AWS Access Keys manually.
    1. If you selected Assume the IAM Role of the Delegate, in Delegate Tag, enter the Tag of the Delegate that this Cloud Provider will use for all connections. For information about Tags, see Delegate Tags.
    Presently, Harness does not support Assume the IAM Role of the Delegate with the Download Artifact command in a Harness Service. If you are using Download Artifact, use the Access and Secret Key settings in the AWS Cloud Provider.
    2. If you selected Enter AWS Access Keys manually, enter your Access Key and your Secret Key. For more information, see Access Keys (Access Key ID and Secret Access Key) from AWS.
The AWS IAM Policy Simulator is a useful tool for evaluating policies and access.
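If you entered access keys manually, you can sanity-check them from the AWS CLI before submitting (a sketch; assumes the AWS CLI is configured with the same access key and secret key):

```
# Returns the account, user ARN, and user ID for the credentials in use
aws sts get-caller-identity
```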

AWS Security Token Service (STS)

If you want to use one AWS account for the connection, but you want to deploy in a different AWS account, use the Assume STS Role option. This option uses the AWS Security Token Service (STS) feature.

In this scenario, the AWS account used for AWS access in Credentials will assume the IAM role you specify in the Role ARN setting.

The Harness Delegate(s) always runs in the account you specify in Credentials via Access/Secret Key or Assume IAM Role on Delegate.

To assume the role in Role ARN, the AWS account in Credentials must be trusted by the role. The trust relationship is defined in the Role ARN role's trust policy when the role is created. That trust policy states which accounts are allowed to give that access to users in the account.

You can use Assume STS Role to establish trust between roles in the same account, but cross-account trust is more common.

The assumed role in Role ARN must have all the IAM policies required to perform your Harness deployment, such as Amazon S3, ECS (Existing Cluster), and AWS EC2 policies. For more information, see Assuming an IAM Role in the AWS CLI from AWS.

To use AWS Security Token Service (STS) for cross-account access, do the following:

  1. Select the Assume STS Role option.
  2. In Role ARN, enter the Amazon Resource Name (ARN) of the role that you want to assume. This is an IAM role in the target deployment AWS account.
  3. (Optional) In External ID, if the administrator of the account to which the role belongs provided you with an external ID, then enter that value. For more information, see How to Use an External ID When Granting Access to Your AWS Resources to a Third Party from AWS.
The AWS IAM Policy Simulator is a useful tool for evaluating policies and access.
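For reference, the trust policy on the Role ARN role might look like the following sketch, where the trusted account ID and external ID are placeholders you would replace with your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "example-external-id" } }
    }
  ]
}
```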

AWS Permissions

User: Harness requires the IAM user be able to make API requests to AWS. For more information, see Creating an IAM User in Your AWS Account from AWS.

User Access Type: Programmatic access. This enables an access key ID and secret access key for the AWS API, CLI, SDK, and other development tools.

Elastic Container Registry (ECR)

Policy Name: AmazonEC2ContainerRegistryReadOnly.

Policy ARN: arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly.

Description: Provides read-only access to Amazon EC2 Container Registry repositories.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

DescribeRegions Action

Since ECR has regions associated with it, you need to create a Customer Managed Policy, add the DescribeRegions action to list those regions, and add that to the role used by the Cloud Provider.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:DescribeRegions",
      "Resource": "*"
    }
  ]
}

Amazon S3

There are two policies required:

The AWS IAM Policy Simulator is a useful tool for evaluating policies and access.

Policy Name: AmazonS3ReadOnlyAccess.

Policy ARN: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess.

Description: Provides read-only access to all buckets via the AWS Management Console.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}

Policy Name: HarnessS3.

Description: Harness S3 policy that uses EC2 permissions. This is a customer-managed policy you must create. In this example we have named it HarnessS3.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:DescribeRegions",
      "Resource": "*"
    }
  ]
}

If you want to use an S3 bucket that is in a different account from the account used to set up the AWS Cloud Provider, you can grant cross-account bucket access. For more information, see Bucket Owner Granting Cross-Account Bucket Permissions from AWS.

ECS (Existing Cluster)

Recommended: Install and run the Harness ECS Delegate in the ECS cluster, and then use the AWS Cloud Provider to connect to that cluster using the Harness ECS Delegate you installed. This is the easiest method to connect to an ECS cluster. For more information, see Installation Example: Amazon Web Services and ECS.

Ensure that you add the IAM roles and policies to your ECS cluster when you create it. You cannot add IAM roles to an existing ECS cluster, but you can add policies to whatever role is already assigned to it.

In addition to the default ECS role, ecsInstanceRole, these policies are required:

Attach both of these policies to the ecsInstanceRole role, and apply that to your ECS cluster when you create it. For information on ecsInstanceRole, see Amazon ECS Instance Role from AWS.
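These attachments can also be made from the AWS CLI, for example (a sketch; uses the policy ARNs listed below, and assumes the AWS CLI is configured and the ecsInstanceRole role exists):

```
aws iam attach-role-policy --role-name ecsInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerServiceforEC2Role
aws iam attach-role-policy --role-name ecsInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
```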

The AWS IAM Policy Simulator is a useful tool for evaluating policies and access.

ELB, ALB, and ECS

Policy Name: AmazonEC2ContainerServiceforEC2Role.

Policy ARN: arn:aws:iam::aws:policy/AmazonEC2ContainerServiceforEC2Role.

Description: Makes calls to the Amazon ECS API. For more information, see Amazon ECS Container Instance IAM Role from AWS.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:CreateCluster",
        "ecs:DeregisterContainerInstance",
        "ecs:DiscoverPollEndpoint",
        "ecs:Poll",
        "ecs:RegisterContainerInstance",
        "ecs:StartTelemetrySession",
        "ecs:UpdateContainerInstancesState",
        "ecs:Submit*",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}

Policy Name: AmazonEC2ContainerServiceRole.

Policy ARN: arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole.

Description: Default policy for Amazon ECS service role.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:Describe*",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:DeregisterTargets",
        "elasticloadbalancing:Describe*",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:RegisterTargets"
      ],
      "Resource": "*"
    }
  ]
}

Policy Name: HarnessECS.

Description: Harness ECS policy. This is a customer-managed policy you must create. In this example we have named it HarnessECS.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:DescribeRepositories",
        "ecs:ListClusters",
        "ecs:ListServices",
        "ecs:DescribeServices",
        "ecr:ListImages",
        "ecs:RegisterTaskDefinition",
        "ecs:CreateService",
        "ecs:ListTasks",
        "ecs:DescribeTasks",
        "ecs:DeleteService",
        "ecs:UpdateService",
        "ecs:DescribeContainerInstances",
        "ecs:DescribeTaskDefinition",
        "application-autoscaling:DescribeScalableTargets",
        "iam:ListRoles",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}

Notes

  • There is a limit on how many policies you can attach to an IAM role. If you exceed the limit, copy the permissions JSON under Action into a single custom policy, and attach that policy to the role.
  • Due to an AWS limitation, Harness cannot limit the ecs:CreateService, ecs:UpdateService, and ecs:DeleteService actions to a specific cluster or resource. This limitation is why we require Resource *.
  • ECS with Public Docker Registry: Both ECS permissions are required.
  • ECS with Private Docker Registry: Both ECS permissions are required. Also, the Docker agent on the container host should be configured to authenticate with the private registry. Please refer to AWS documentation here.
  • ECS with ECR: For ECS and ECR, the permissions in both sections are required.
  • ECS with GCR: This is currently not supported.

Auto Scaling with ECS

For Auto Scaling, the AWS Managed policy AWSApplicationAutoscalingECSServicePolicy should be attached to the default ecsInstanceRole role, and applied to your ECS cluster when you create it.

For information on AWSApplicationAutoscalingECSServicePolicy, see Amazon ECS Service Auto Scaling IAM Role from AWS. For information on ecsInstanceRole, see Amazon ECS Instance Role from AWS.

Policy Name: AWSApplicationAutoscalingECSServicePolicy.

Policy ARN: arn:aws:iam::aws:policy/AWSApplicationAutoscalingECSServicePolicy.

Description: Allows describing your CloudWatch alarms and registered services, and updating your Amazon ECS service's desired count on your behalf.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:DescribeServices",
        "ecs:UpdateService",
        "cloudwatch:PutMetricAlarm",
        "cloudwatch:DescribeAlarms",
        "cloudwatch:DeleteAlarms"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

AWS CodeDeploy

The AWS IAM Policy Simulator is a useful tool for evaluating policies and access.

There are two policies required: AWSCodeDeployRole and AWSCodeDeployDeployerAccess.

Policy Name: AWSCodeDeployRole.

Policy ARN: arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole.

Description: Provides CodeDeploy service access to expand tags and interact with Auto Scaling on your behalf.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:CompleteLifecycleAction",
        "autoscaling:DeleteLifecycleHook",
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLifecycleHooks",
        "autoscaling:PutLifecycleHook",
        "autoscaling:RecordLifecycleActionHeartbeat",
        "autoscaling:CreateAutoScalingGroup",
        "autoscaling:UpdateAutoScalingGroup",
        "autoscaling:EnableMetricsCollection",
        "autoscaling:DescribePolicies",
        "autoscaling:DescribeScheduledActions",
        "autoscaling:DescribeNotificationConfigurations",
        "autoscaling:SuspendProcesses",
        "autoscaling:ResumeProcesses",
        "autoscaling:AttachLoadBalancers",
        "autoscaling:PutScalingPolicy",
        "autoscaling:PutScheduledUpdateGroupAction",
        "autoscaling:PutNotificationConfiguration",
        "autoscaling:DescribeScalingActivities",
        "autoscaling:DeleteAutoScalingGroup",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceStatus",
        "ec2:TerminateInstances",
        "tag:GetTags",
        "tag:GetResources",
        "sns:Publish",
        "cloudwatch:DescribeAlarms",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeInstanceHealth",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:DeregisterTargets"
      ],
      "Resource": "*"
    }
  ]
}

Policy Name: AWSCodeDeployDeployerAccess.

Policy ARN: arn:aws:iam::aws:policy/AWSCodeDeployDeployerAccess.

Description: Provides access to register and deploy a revision.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "codedeploy:Batch*",
        "codedeploy:CreateDeployment",
        "codedeploy:Get*",
        "codedeploy:List*",
        "codedeploy:RegisterApplicationRevision"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

AWS EC2

The AWS IAM Policy Simulator is a useful tool for evaluating policies and access.

Provisioned and Static Hosts

Policy Name: AmazonEC2FullAccess.

Policy ARN: arn:aws:iam::aws:policy/AmazonEC2FullAccess.

Description: Provides full access to Amazon EC2 via the AWS Management Console.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "ec2:*",
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "elasticloadbalancing:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "cloudwatch:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "autoscaling:*",
      "Resource": "*"
    }
  ]
}

Trusted entities

Newly created roles under Amazon EC2 have trusted entities listed as ec2.amazonaws.com. For ECS, this needs to be updated to ecs.amazonaws.com. See the AWS documentation at Amazon ECS Service Scheduler IAM Role.
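A role trusted by ECS carries a trust policy like the following sketch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```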

Amazon Lambda

The IAM role attached to your Delegate host (either an EC2 instance or ECS Task) must have the AWSLambdaRole policy attached. The policy contains the lambda:InvokeFunction permission needed for Lambda deployments.

For details on connecting Lambda with S3 across AWS accounts, see How do I allow my Lambda execution role to access my Amazon S3 bucket? from AWS.

Policy Name: AWSLambdaRole.

Policy ARN: arn:aws:iam::aws:policy/service-role/AWSLambdaRole.

Description: Default policy for AWS Lambda service role.

Policy JSON:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:InvokeFunction"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

For more information, see Identity-based IAM Policies for AWS Lambda from AWS.

Ensure that the IAM role assigned to the Delegate has the IAMReadOnlyAccess (arn:aws:iam::aws:policy/IAMReadOnlyAccess) policy attached. This enables Harness to confirm that the AWSLambdaRole policy is attached.

Microsoft Azure

When you choose Microsoft Azure in Type, the Cloud Provider dialog changes for the Azure settings.

You can find the information you need in the App registration Overview page.

The Microsoft Azure dialog has the following fields.

Field

Description

Client ID

This is the Client/Application ID for the Azure app registration you are using. It is found in the Azure Active Directory App registrations. For more information, see Quickstart: Register an app with the Azure Active Directory v1.0 endpoint from Microsoft.

To access resources in your Azure subscription, you must assign the Azure App registration using this Client ID to a role in that subscription. Later, when you set up an Azure service infrastructure in a Harness environment, you will select a subscription.

If the Azure App registration using this Client ID is not assigned a role in a subscription, no subscriptions will be available. For more information, see Assign the application to a role and Use the portal to create an Azure AD application and service principal that can access resources from Microsoft.

Tenant ID

The Tenant ID is the ID of the Azure Active Directory (AAD) in which you created your application. This is also called the Directory ID. For more information, see Get tenant ID and Use the portal to create an Azure AD application and service principal that can access resources from Azure.

Key

Authentication key for your application. This is found in Azure Active Directory, App Registrations. Click the App name. Click Certificates & secrets, and then click New client secret.

You cannot view existing secret values, but you can create a new key. For more information, see Create a new application secret from Azure.
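If you prefer the Azure CLI, one way to create an App registration with credentials in a single step is az ad sp create-for-rbac (the service principal name below is illustrative; assumes the Azure CLI is installed and logged in):

```
az ad sp create-for-rbac --name harness-sp
# Output fields map to the Harness dialog:
#   appId    -> Client ID
#   password -> Key
#   tenant   -> Tenant ID
```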

Permissions

  • For Azure Container Repository (ACR): The Client ID (Application ID) must be assigned to a role that has the Reader permission on the resource group of the ACR container. This is the minimum requirement.
  • For Azure Kubernetes Services (AKS): The Client ID (Application ID) must be assigned to a role that has the Owner permission on the AKS cluster. Alternatively, if you use the Kubernetes Cluster Cloud Provider with the Kubernetes Delegate installed in the AKS cluster (recommended), no AKS permissions are required.

Pivotal Cloud Foundry (PCF)

When you choose PCF in Type, the Cloud Provider dialog changes for the PCF settings.

The PCF dialog has the following fields.

Field

Description

Endpoint URL

Enter the API endpoint URL, without the URL scheme. For example, api.run.pivotal.io. Omit http://. For more information, see Identifying the API Endpoint for your PAS Instance from Pivotal.

Username / Password

Username and password for the PCF account to use for this connection.

Pivotal Cloud Foundry Name

Provide a unique name for this connection. You will use this name to select this connection when creating a service infrastructure for your deployment.
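You can confirm the endpoint and credentials with the cf CLI before entering them in the dialog (a sketch; the endpoint and username are illustrative, and the commands assume the cf CLI is installed):

```
cf api api.run.pivotal.io
cf login -u user@example.com
```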

Usage Scope

If you want to restrict the use of a provider to specific applications and environments, do the following:

  1. In Usage Scope, click the drop-down under Applications, and click the name of the application.
  2. In Environments, click the name of the environment.

PCF Permissions

A PCF user account with the Admin, Org Manager, or Space Manager role is required. The user account must be able to update spaces, orgs, and applications.

For more information, see Orgs, Spaces, Roles, and Permissions from Pivotal.

Physical Data Center

When you choose Physical Data Center in Type, the Cloud Provider dialog changes for the Physical Data Center settings.

For a Physical Data Center Cloud Provider, no credentials are required. Instead, you add an SSH secret in Harness Secrets Management, and select that later in your Harness Environment in Connection Attributes. For more information, see Secrets Management.

