Terraform Provisioner

Harness has first-class support for HashiCorp Terraform as an infrastructure provisioner. This document describes how to use the Harness Terraform Infrastructure Provisioner in your Harness Application.

Overview

When creating your Harness Application, it is assumed that you already have an infrastructure in place where you want to deploy your Services. In some cases, you will want to use an Infrastructure Provisioner to define this infrastructure on the fly.

You use a Terraform Infrastructure Provisioner in the following ways:

  • Terraform Infrastructure Provisioner - Add a Terraform Provisioner as a blueprint for the infrastructure for the Service(s) you deploy. You add the provisioner script by connecting to a Git repo where the scripts are kept. You simply need to map some of the output variables in the script to the required fields in Harness. When Harness deploys your microservice, it will build your infrastructure according to this blueprint.
  • Service Infrastructure - Select the Terraform Provisioner in a Service Infrastructure in an Environment and then use this Service Infrastructure in any Workflow where you want to provision the deployment infrastructure.
  • Workflow Step - Add a Terraform Provisioner step to a Workflow to build the infrastructure according to your Terraform script, and then deploy to the provisioned infrastructure.

Permissions

The permissions required for Harness to use your provisioner and successfully deploy to the provisioned instances depend on the deployment platform you provision. The permissions are discussed in this topic in the configuration steps where they are applied, but, as a summary, you will need to manage the following permissions:

  • Delegate - The Delegate will require permissions according to the deployment platform. It will use the access, secret, and SSH keys you configure in Harness Secrets Management to perform deployment operations. For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. For more information, see Trust Relationships and Roles.
  • Cloud Provider - The Cloud Provider must have access permissions for the resources you are planning to create in the Terraform script. For some Harness Cloud Providers, you can use the installed Delegate and have the Cloud Provider assume the permissions used by the Delegate. For others, you can enter cloud platform account information.
The account used for the Cloud Provider will require platform-specific permissions for creating infrastructure. For example, to create EC2 AMIs the account requires the AmazonEC2FullAccess policy.
  • Git Repo - You will add the Git repo where the provisioner script is located to Harness as a Source Repo Provider. For more information, see Add Source Repo Providers.
  • Access and Secret Keys - These are set up in Harness Secrets Management and then used as variable values when you add a Provisioner step to a Workflow.
  • SSH Key - In order for the Delegate to copy artifacts to the provisioned instances, it will need an SSH key. You set this up in Harness Secrets Management and then reference it in the Harness Environment Service Infrastructure.
  • Platform Security Groups - Security groups are associated with EC2 and other cloud platform instances and provide security at the protocol and port access level. You will need to define security groups in your provisioner scripts and ensure that they allow the Delegate to connect to the provisioned instances.
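
For example, here is a minimal sketch of a Terraform security group that would let the Delegate reach provisioned instances over SSH. The VPC ID and CIDR block are placeholders; substitute the values for the network where your Delegate runs.

# Hypothetical security group allowing SSH from the Delegate's subnet.
resource "aws_security_group" "delegate_ssh" {
  name   = "allow-delegate-ssh"
  vpc_id = "vpc-0123456789abcdef0"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24"] # subnet where the Delegate runs
  }
}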

Delegate and Cloud Provider Setup

This section describes the Harness Delegate and Cloud Provider setup for Terraform provisioning.

There are many types of Delegates, such as Shell Script, ECS, and Kubernetes, and Cloud Providers can connect Harness to your deployment platform using other account methods. For more information, see Delegate Installation and Management and Add Cloud Providers.

Delegate and Terraform Setup

The Delegate should be installed where it can connect to the provisioned instances. Ideally, this is the same subnet, but if you are provisioning the subnet then you can put the Delegate in the same VPC and ensure that it can connect to the provisioned subnet using security groups.

To set up the Delegate, do the following:

  1. Install the Delegate on a host where it will have connectivity to your provisioned instances. To install a Delegate, follow the steps in Delegate Installation and Management. Once the Delegate is installed, it will be listed on the Harness Delegates page.
  2. If needed, add a Delegate Tag. Some Delegates, like a Kubernetes Cluster Delegate, do not need a Tag and can be referenced in a Kubernetes Cluster Cloud Provider by name. ECS and Shell Script Delegates require a Delegate Tag. For steps on adding a Delegate Tag, see Delegate Tags.
  3. Install Terraform on the Delegate. Terraform must be installed on the Delegate to use a Harness Terraform Provisioner. You can install Terraform manually or use a Delegate Profile. To use a Delegate Profile to install Terraform, do the following:
    1. In Harness, click Setup and then Harness Delegates.
    2. Click Manage Delegate Profiles, and then click Add Delegate Profile. The Manage Delegate Profile dialog appears.
    3. In Name, enter a name for the profile, such as Terraform Install and Setup.
    4. In Startup Script, enter the Terraform installation script you want to run when the profile is applied. Your script might vary depending on the instance type hosting the Delegate. For example:

    curl -O -L https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip
    unzip terraform_0.11.13_linux_amd64.zip
    sudo mv terraform /usr/local/bin/
    terraform --version
    5. When you are finished, click SUBMIT.
    6. Locate your Delegate listing, and next to Profile, click the dropdown menu and select the Profile you created. In a few minutes, a timestamp link appears. Click the link to see the log.

Terraform is now installed on the Delegate.

If you will be using a Cloud Provider that uses Delegate Tags to identify Delegates, add a Tag to this Delegate. For more information, see Delegate Tags. When you are done, the Delegate listing will look something like this:

Cloud Provider Setup

Harness uses Cloud Providers to connect to the different platforms where you provision infrastructure, such as a Kubernetes Cluster and AWS. When you create the Cloud Provider, you can enter the platform account information for the Cloud Provider to use as credentials, or you can use a Delegate running in the infrastructure to provide the credentials for the Cloud Provider.

If you are building an infrastructure on a platform that requires specific permissions, such as AWS AMIs, the account used by the Cloud Provider needs the required policies. For example, to create AWS EC2 AMIs, the account needs the AmazonEC2FullAccess policy. See the list of policies in Add Cloud Providers. For steps on adding an AWS Cloud Provider, see Amazon Web Services (AWS) Cloud.

When the Cloud Provider uses the installed Delegate for credentials, it assumes the Delegate's role(s). For example, here is an AWS Cloud Provider assuming the IAM role from the Delegate where you installed Terraform.

Git Repo Setup

To use your Terraform script in Harness you host the script in a Git repo and add a Harness Source Repo Provider that connects Harness to the repo. For steps on adding the Source Repo Provider, see Add Source Repo Providers.

Here is an example of a Source Repo Provider and the GitHub repo it is using:

In the image above, there is no branch added in the Source Repo Provider Branch Name field as this is the master branch, and the ec2 folder in the repo is not entered in the Source Repo Provider. Later, when you use the Source Repo Provider in your Terraform Provisioner, you can specify the branch and root directory:

Artifact Server Setup

There are no Artifact Server settings specific to setting up a Harness Infrastructure Provisioner. When you deploy, Harness performs an Artifact Check of the artifact associated with a Service after Harness provisions the infrastructure, and then later deploys the artifact to the provisioned infrastructure.

You will need to set up an Artifact Server to add an artifact to your Harness Service. For steps on setting up an Artifact Server, see Add Artifact Servers.

Application and Service Setup

Infrastructure Provisioners are added to a Harness Application and then incorporated into the Environment and Workflow components.

Any Harness Service can be deployed using the Infrastructure Provisioner. For the example in this article, we use an SSH type Service with a TAR File artifact type:

In the Service, we add a TAR file of Apache Tomcat from an AWS S3 bucket.

In this example, a Cloud Provider is used to collect the artifact instead of an Artifact Server because the artifact is in AWS S3, and AWS S3 is always accessed using a Cloud Provider. If you want to use the same Cloud Provider for the Infrastructure Provisioner and artifact collection, ensure that the Delegate can connect to the artifact repo.

Terraform Provisioner Setup

This section will walk you through a detailed setup of a Terraform Provisioner for a deployment to an AWS EC2 VPC, and provide examples of the other supported platforms.

Harness supports first-class Terraform provisioning for AWS-based infrastructures (SSH, ASG, ECS, Lambda) and Google Kubernetes Engine (GKE).

For all of the supported platforms, setting up the Terraform Provisioner involves the following steps:

  1. Add your Terraform script via its Git Repo so Harness can pull the script.
  2. Map the relevant Terraform output variables from the script to the required Harness fields for the deployment platform (AWS, Kubernetes, etc).

To set up a Terraform Infrastructure Provisioner, do the following:

  1. In your Harness Application, click Infrastructure Provisioners.
  2. Click Add Infrastructure Provisioner, and then click Terraform. The Add Terraform Provisioner dialog appears.
  3. In Display Name, enter the name for this provisioner. You will use this name to select this provisioner in Harness Environments and Workflows.
  4. Click NEXT. The Script Repository section appears. This is where you provide the location of your Terraform script in your Git repo.
  5. In Script Repository, in Git Repository, select the Source Repo Provider you added for the Git repo where your script is located.
  6. In Git Repository Branch, enter the repo branch to use. For example, master.
  7. In Terraform Configuration Root Directory, enter the folder where the script is located. Here is an example showing the Git repo on GitHub and the Script Repository settings:
  8. Click NEXT. The Variables section is displayed. This is where you will add the script input variables that must be given values when the script is run.
  9. In Variables, click Populate Variables. The Populate from Example dialog appears. Click SUBMIT to have the Harness Delegate use the Source Repo Provider you added to pull the variables from your script and populate the Variables section.
    If Harness cannot pull the variables from your script, check your settings and try again. Ensure that your Source Repo Provider is working by clicking its TEST button.

    Once Harness pulls in the variables from the script, it populates the Variables section.
    In the Type column for each variable, specify Text or Encrypted Text. When you add the provisioner to a Workflow, you will have to provide text values for Text variables, and select Harness Encrypted Text variables for Encrypted Text variables. This will be described later in this article, but you can read about Encrypted Text variables in Secrets Management.
  10. Click NEXT. The Backend Configuration (Remote state) section appears. This is an optional step.

    By default, Terraform uses the local backend to manage state, in a local JSON file named terraform.tfstate on the disk where you are running Terraform. With remote state, Terraform writes the state data to a persistent remote data store (such as an S3 bucket or HashiCorp Consul), which can then be shared between all members of a team. You can add the backend configs (remote state variables) for remote state to your Terraform Provisioner in Backend Configuration (Remote state).
  11. In Backend Configuration (Remote state), enter the backend configs from your script. (A sketch of the corresponding Terraform blocks follows these steps.)
  12. Click NEXT. The Terraform Provisioner dialog will look something like this:
  13. Click SUBMIT. The Terraform Provisioner is created.
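
For reference, here is a minimal sketch of the kind of variable and backend blocks Harness reads from a script. The variable names and the S3 backend are examples only; Harness lists whatever variables your own script declares, and the backend configs you enter in Harness correspond to the settings a partial backend block leaves blank.

# Example input variables; these would appear in the Variables section.
variable "region" {}
variable "access_key" {}
variable "secret_key" {}
variable "tag" {
  default = "tf-instance"
}

# Example partial S3 backend; the bucket, key, and region values would be
# entered as backend configs (remote state) in the Harness Provisioner.
terraform {
  backend "s3" {}
}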

Next you will create Service Mappings to map specific script settings to the fields Harness requires for provisioning.

Script Repository Expression Support

When you specify the Script Repository for the Terraform Provisioner, you select the Source Repo Provider for the Git repo where your script is located, the Git repo branch to use, and root directory for the Terraform configuration file (config.tf).

You can also use expressions in the Git Repository Branch and Terraform Configuration Root Directory and have them replaced by Workflow variable values when the Terraform Provisioner is used by the Workflow. For example, a Workflow can have variables for branch and path:

In Script Repository, you can enter variables as ${workflow.variables.branch} and ${workflow.variables.path}:

You cannot use variables in the Script Repository fields to populate the Variables section. To populate the Variables section, click Populate from Example and enter actual values.

When the Workflow is deployed, you are prompted to provide values for the Workflow variables, which are then applied to the Script Repository settings:

This allows the same Terraform Provisioner to be used by multiple Workflows, where each Workflow can use a different branch and path for the Script Repository.

Service Mappings

Harness supports first-class Service Mapping for AWS-based infrastructures (SSH, ASG, ECS, Lambda) and Google Kubernetes Engine (GKE).

In the Terraform Provisioner Service Mappings, you map Terraform outputs from your Terraform script to the fields Harness requires for provisioning. Service Mappings provide Harness with the minimum settings needed to provision using your script.

If you have been running your deployments manually, you might not have outputs configured in your template files. To configure Service Mappings, you will need to add these output variables to your template.

To create a Service Mapping, do the following:

  1. In your Terraform Provisioner, in Service Mappings, click Add Service Mapping. The Service Mapping dialog appears.
    The mapping required by Harness depends on the type of Harness Service you are deploying and on the Cloud Provider (and hence the deployment platform) you are using for deployment. In the following steps, we will map a TAR type Service with an AWS Cloud Provider.
  2. In Service, enter the Harness Service you are deploying using the provisioner.
  3. In Deployment Type, select the type of deployment, such as SSH, ECS, or Kubernetes.
  4. In Cloud Provider Type, select the type of Cloud Provider you set up to connect to your deployment environment.
  5. Click NEXT.
  6. In Configuration, map the required fields to your Terraform script outputs. The following sections provide examples for the common deployment types. When you are finished, click NEXT and then SUBMIT. The Service Mapping is added to the Terraform Provisioner and you can move on to using the provisioner in an Environment.

You map the Terraform script outputs using this syntax, where exact_name is the name of the output:

${terraform.exact_name}

SSH

The Secure Shell (SSH) deployment type requires the Region and Tags fields. The following example shows the Terraform script outputs used for the mandatory SSH deployment type fields:
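
For instance, assuming your script declares outputs named region and tags (the names are up to you), you would map Region to ${terraform.region} and Tags to ${terraform.tags}:

output "region" {
  value = "us-east-1"
}

output "tags" {
  value = {
    Name = "${var.tag}"
  }
}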

ECS

The ECS deployment type requires the Region and Cluster fields. The following example shows the Terraform script outputs used for the mandatory ECS deployment type fields:
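
For instance, assuming an ECS cluster resource named aws_ecs_cluster.example in your script and outputs named region and cluster, you would map Region to ${terraform.region} and Cluster to ${terraform.cluster}:

output "region" {
  value = "us-east-1"
}

output "cluster" {
  value = "${aws_ecs_cluster.example.name}"
}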

Kubernetes

The Kubernetes deployment type requires the Cluster Name and Namespace fields. The following example shows the Terraform script outputs used for the mandatory Kubernetes deployment type fields:
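
For instance, assuming a GKE cluster resource named google_container_cluster.example in your script and outputs named cluster_name and namespace, you would map Cluster Name to ${terraform.cluster_name} and Namespace to ${terraform.namespace}:

output "cluster_name" {
  value = "${google_container_cluster.example.name}"
}

output "namespace" {
  value = "default"
}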

Lambda

The Lambda deployment type requires the IAM Role and Region fields. The following example shows the Terraform script outputs used for the mandatory and optional Lambda deployment type fields:
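
For instance, assuming an IAM role resource named aws_iam_role.lambda_exec in your script and outputs named region and iam_role, you would map Region to ${terraform.region} and IAM Role to ${terraform.iam_role}:

output "region" {
  value = "us-east-1"
}

output "iam_role" {
  value = "${aws_iam_role.lambda_exec.arn}"
}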

Environment Setup

A Harness Environment represents your production and non-production deployment environments and contains Service Infrastructure settings that specify a deployment infrastructure: a Cloud Provider, a deployment type (such as ECS), and the specific infrastructure details for the deployment, such as VPC settings.

Typically, when you add an Environment, you specify the Service Infrastructure for an existing infrastructure. To use your Terraform Provisioner, you instead add it to the Service Infrastructure to identify the infrastructure that will be provisioned dynamically at deployment runtime.

The following image shows two Service Infrastructures for an Environment. One infrastructure is for an already-provisioned infrastructure and one is using a Terraform Provisioner to dynamically provision the infrastructure.

Later, when you create a Workflow, you will use a Terraform Provisioner step to provision the infrastructure. During deployment, the Terraform Provisioner step will provision the infrastructure and then the Workflow will deploy to it via the Environment Service Infrastructure.

To use a Terraform Provisioner in an Environment Service Infrastructure, do the following:

  1. In your Harness Application, click Environments. The Environments page appears.
  2. Click Add Environment. The Environment dialog appears.
  3. In the Environment dialog, enter a Name and select an Environment Type, such as Non-Production. When you are done, it will look something like this:
  4. Click SUBMIT. The new Environment appears. Next, you will add a Service Infrastructure using the Terraform Infrastructure Provisioner.
  5. Click Add Service Infrastructure. The Service Infrastructure dialog appears.
  6. For the Select Cloud Provider section, enter the same Service and Cloud Provider you are using in the Terraform Infrastructure Provisioner.
  7. Click Next. The Configuration settings appear.
  8. In Provision Type, select Dynamically Provisioned.
  9. In Provisioner, select your Terraform Infrastructure Provisioner. If you do not see it listed, then the Select Cloud Provider section is not using the same Service or Cloud Provider as your Terraform Infrastructure Provisioner.
  10. In Connection Attributes, select the SSH credentials to use when connecting to the provisioned instance.

    For example, if your Terraform Provisioner will create an AMI, you need to have a key_name argument in your script and an SSH secret in Harness using the same pem file associated with the key_name value. The following image shows how the SSH secret in Harness Secrets Management is used in Connection Attributes and how the Terraform script includes the key_name argument that uses the same key as the pem file.
    For information on creating SSH credentials, see Secrets Management.
  11. In Host Name Convention, enter a naming convention or use the default Harness expression.
  12. Select Use Public DNS for connection if you want the Harness Delegate to use DNS to resolve the hostname. This is a common scenario. If you use DNS, ensure that your instances will have a public IP registered. For example, the following Terraform script includes the associate_public_ip_address argument set to true (see the sketch after these steps):
    In AWS, you can control whether your instance receives a public IP address in a number of ways. For more information, see Public IPv4 Addresses and External DNS Hostnames from AWS.

    When you are done, the dialog will look something like the following Service Infrastructure for an SSH deployment:
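
For illustration, here is a minimal sketch of how the key_name and associate_public_ip_address arguments from the steps above might appear in the provisioner script. The AMI, key, and instance type values are examples only.

resource "aws_instance" "tf_instance" {
  ami                         = "ami-0080e4c5bc078760e"
  instance_type               = "t2.micro"
  key_name                    = "doc-delegate1" # same key pair as the pem file stored as an SSH secret in Harness
  associate_public_ip_address = "true"          # gives the instance a public DNS name the Delegate can resolve
}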

Now that you have set up a Service Infrastructure, you can use it in a Workflow along with a Terraform Provisioner step that provisions the infrastructure.

Workflow Setup

Once you have a Terraform Provisioner and an Environment Service Infrastructure that uses it, you can use the Terraform Provisioner in a Workflow's pre-deployment steps. The Terraform Provisioner step will provision the infrastructure and then the Workflow will deploy to it via the Environment Service Infrastructure.

The following image shows how the Workflow uses the Terraform Provisioner step to create the infrastructure, then selects the node(s) in the infrastructure for deployment, and finally deploys to AWS EC2.

Terraform Options

Before creating the Workflow, here are a few features to be aware of:

  • Terraform Dry Run - The Terraform Provisioner step in the Workflow can be executed as a dry run, just like running the terraform plan command. The dry run will refresh the state file and generate a plan.
  • Terraform Target Support - You can target one or more specific modules in your Terraform script, just like using the terraform plan -target command. See Resource Targeting from Terraform. You can also use Workflow variables as your targets.
  • Using tfvars Files - You can use the input variables from the Terraform script in your Terraform Provisioner or use a tfvars file in the same repository, just like using the terraform apply -var-file command. See Variable Definitions (.tfvars) Files from Terraform.
  • Terraform Destroy - As a post-deployment step, you can add a Terraform Destroy step to remove the provisioned infrastructure, just like running the terraform destroy command. See destroy from Terraform.

All of these features are discussed in detail below.

Workflow Creation

To use a Terraform Provisioner in your Workflow, do the following:

  1. In your Harness Application, click Workflows.
  2. Click Add Workflow. The Workflow dialog appears.
  3. Enter a name and description for the Workflow.
  4. In Workflow Type, select Canary.
Harness Infrastructure Provisioners are only supported in Canary and Multi-Service types.
  5. In Environment, select the Environment that has the Terraform Provisioner set up in one of its Service Infrastructures.
  6. Click SUBMIT. The new Workflow is created.

By default, the Workflow includes a Pre-deployment Steps section. This is where you will add a step that uses your Terraform Provisioner.

Terraform Provisioner Step

To add the Terraform Provisioner Step, do the following:

  1. In your Workflow, in Pre-deployment Steps, click Add Step. A dialog containing the available steps appears.

  2. In Provisioners, click Terraform Provision. The Terraform Provision dialog appears.

  3. In Provisioner, select a Harness Terraform Provisioner.
  4. In Timeout, enter how long Harness should wait to apply the Terraform Provisioner before failing the Workflow.
The Inherit following configurations from dry run setting is described in Terraform Dry Run.
  5. Click NEXT. The Input Values settings appear.

Input Values

The Input Values are the same variables from the Terraform Provisioner Variables section.

Select a value for each variable in Input Values. For encrypted text values, select an Encrypted Text secret from Harness Secrets Management.

For more information, see Secrets Management.

The Input Values section also includes the Use tfvar files option for using a variable definitions file instead of using the variables from the Terraform Provisioner. The path to the variable definitions file is relative to the root of the Git repo specified in the Terraform Provisioner setting. For example, in the following image, the testing.tfvars file is located in the repo at terraform/ec2/testing/testing.tfvars:
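
For example, a testing.tfvars file at that path might simply assign values to the script's input variables (the variable names and values here are illustrative):

# terraform/ec2/testing/testing.tfvars
region = "us-east-1"
tag    = "tf-instance-testing"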

If Use tfvar files is selected and there are also Inline Values, when Harness loads the variables from the tfvars file, the Inline Values variables override the variables from the tfvars file.

Click NEXT. The Backend Configuration (Remote state) section appears.

Backend Configuration (Remote state)

The Backend Configuration (Remote state) section contains the same remote state values set up in the Backend Configuration (Remote state) section of the Terraform Provisioner you selected.

Enter values for each backend config (remote state variable), and click NEXT. The Targets section appears.

Targets

You can target one or more specific modules in your Terraform script, just like using the terraform plan -target command. See Resource Targeting from Terraform.

For example, in the following image you can see that the Terraform script has one resource and two modules, and the Targets section displays all three as potential targets.
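
As a sketch, such a script might look like the following (the module names and sources are examples); Targets would then list the aws_instance resource and both modules:

resource "aws_instance" "tf_instance" {
  ami           = "ami-0080e4c5bc078760e"
  instance_type = "t2.micro"
}

module "network" {
  source = "./modules/network"
}

module "storage" {
  source = "./modules/storage"
}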

If you have multiple modules in your script and you do not select one in Targets, all modules are used.

You can also use Workflow variables as your targets. For example, you can create a Workflow variable named module and then enter the variable ${workflow.variables.module} in the Targets field. When you deploy the Workflow, you are prompted to provide a value for the variable:

Click NEXT. The Terraform Provision step is added to the Workflow.

Terraform Dry Run

The Terraform Provision step in the Workflow can be executed as a dry run, just like running the terraform plan command. The dry run will refresh the state file and generate a plan but it is not applied. You can then set up an Approval step to follow the dry run, followed by the Terraform Provision step to apply the plan.

To set up Terraform dry run in your Workflow, do the following:

  1. Add your Terraform Provision step, but select the Set as dry run option.

  2. To add the Approval step, click Add Step, and select Approval.

  3. In the Approval step, select whatever approval options you want, and then click SUBMIT.
  4. Next, click Add Step and select Terraform Provision.
  5. In Terraform Provision, select your Terraform Provisioner, and then select Inherit following configurations from dry run. All of the remaining settings are disabled because they are inherited from the Dry Run step.
  6. Click SUBMIT.

Your Workflow now performs a dry run, asks for an approval, and then, once approved, applies the Terraform script.

You can rename the Terraform Provision_2 step to Terraform Apply or some other name that shows that it is applying the Dry Run step.

If the Approval step takes a long time to be approved, there is the possibility that a new commit occurs in the Git repo containing your Terraform script. To avoid this problem, when the Workflow performs the dry run, it saves the commit ID of the script file. Later, after the approval, the Terraform Provision step will use the commit ID to ensure that it executes the script that was dry run.

Terraform Destroy

In the Post-deployment Steps of the Workflow, you can add a Terraform Destroy step to remove the provisioned infrastructure, just like running the terraform destroy command. See destroy from Terraform.

  1. To add the Terraform Destroy step, in the Post-deployment Steps of the Workflow, click Add Step.
  2. In the step list, click Terraform Destroy.

The Terraform Destroy dialog appears.

  3. In Provisioner, select the Terraform Provisioner you are using in the Pre-deployment Steps, and then click SUBMIT. The Terraform Destroy step is added to the Workflow.

Deployment Example

This section describes the deployment steps for a Workflow using the Terraform Provision step and deploying to a provisioned AMI.

This is a Canary deployment Workflow, but we are only interested in Phase 1 where the Terraform provisioning occurs, and the artifact is installed in the provisioned AMI. Phase 2 of the Canary deployment is omitted.

In the Pre-Deployment section, the Terraform Provision step is executed. When you click the step you can see the Terraform command executed in Details.

Note the DNS name of the AMI in the dns output:

You will see this name used next.

In Phase 1 of the Canary deployment, click Select Nodes to see that Harness has selected the provisioned AMI as the deployment target host. See that it used the same DNS name as the output in the Terraform Provision step:

Lastly, expand the Deploy Service step, and then click Install. You will see that the DNS name is shown on the arrow leading to install, and that the Details section displays the internal Delegate and provisioned target host addresses.

As you can see, the artifact was copied to the provisioned host. Deployment was a success.

Deployment Rollback

If you have successfully deployed Terraform modules and on the next deployment there is an error that initiates a rollback, Harness will roll back the provisioned infrastructure to the previous, successful version of the Terraform state. Harness will not increment the serial in the state, but perform a hard rollback to the exact version of the state provided.

Rollback Limitations

If you deployed two modules successfully already, module1 and module2, and then attempted to deploy module3, but failed, Harness will roll back to the successful state of module1 and module2.

However, let's look at the situation where module3 succeeds and now you have module1, module2, and module3 deployed. If the next deployment fails the rollback will only roll back to the Terraform state with module3 deployed. Module1 and module2 were not in the previous Terraform state, so the rollback excludes them.

Troubleshooting

This section lists some of the common exceptions that can occur and provides steps to fix the errors.

Delegate Could Not Reach the Resource

In order for the Delegate to reach the provisioned instances it must have connectivity and permission to SSH into the instance and perform deployment tasks. Depending on the platform where you deploy, this can involve different settings.

For example, the following Terraform script provisions an AMI to AWS EC2 and uses the same subnet (subnet_id) and security group (security_groups) as the Delegate, and it includes the boolean associate_public_ip_address setting to ensure that the Delegate can resolve the AMI's name.

resource "aws_instance" "tf_instance" {
subnet_id = "subnet-05788710b1b06b6b1"
security_groups = ["sg-05e7b8b9cad94b393"]
key_name = "doc-delegate1"
ami = "ami-0080e4c5bc078760e"
instance_type = "t2.micro"
associate_public_ip_address = "true"
tags {
Name = "${var.tag}"
}
}

In this case, the Harness Environment Service Infrastructure will also have the Use Public DNS for connection option selected.

Key Changed

The variables populated in the Infrastructure Provisioner Variables section are assigned a type.

If the type changes from Encrypted Text to Text, which can happen if you re-populate the variable, you might receive the following error on deployment:

Invalid request: The type of variable access_key has changed. 
Please correct it in the workflow step.

Simply correct the variable type in Variables.

Variable type is also used when you add a provisioner step to the Workflow.

You should ensure that the type selected in the Provisioner Variables section is preserved in the Workflow provisioner Input Values step.

Provisioner mapping region was not resolved

If you have mapped provisioner settings in Service Mappings but your provisioner configuration does not have outputs for those variables, you will receive the following error:

Invalid request: The infrastructure provisioner mapping region 
was not resolved from the provisioner outputs
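
For example, if the Service Mapping sets Region to ${terraform.region}, the script must declare a matching output (the value shown is illustrative):

output "region" {
  value = "us-east-1"
}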

Error Validating Provider Credentials

If you are using access or secret keys in your provisioner configuration and AWS is unable to validate them, you will receive the following error:

provider.aws: error validating provider credentials: 
error calling sts:GetCallerIdentity:
InvalidClientTokenId: The security token included in the request is invalid.

Ensure that your access or secret keys are valid. If you are using Harness Encrypted Text secrets to manage the keys, then update the Encrypted Text secrets with the correct values. For more information, see Secrets Management.

