Provision using the Terraform Provisioner Step

This topic describes how to provision infrastructure using the Workflow Terraform Provisioner step.

You use the Terraform Provisioner step in a Workflow to run the Terraform script added in a Harness Terraform Infrastructure Provisioner.

During deployment, the Terraform Provisioner step provisions the target infrastructure.

Harness Terraform Infrastructure Provisioners are only supported in Canary and Multi-Service Workflows. For AMI deployments, Terraform Infrastructure Provisioners are also supported in Blue/Green Workflows.

Before You Begin

Ensure you have read Add Terraform Scripts and Map Terraform Infrastructure before you add the Terraform Provisioner step to a Workflow.

You can also run Harness Terraform Infrastructure Provisioners using the Terraform Apply Workflow step. See Using the Terraform Apply Command.

What is the difference between the Terraform Provisioner and Terraform Apply steps? The Terraform Provisioner step is used to provision infrastructure and is added in the Pre-deployment Steps of a Workflow. The Terraform Apply step can run any Harness Terraform Infrastructure Provisioner and can be placed anywhere in the Workflow.

In addition, related features are documented in other topics. See Next Steps at the end of this topic.

Visual Summary

This topic describes steps 3 through 6 in the Harness Terraform Provisioning implementation process:

For step 1, see Add Terraform Scripts. For step 2, see Map Terraform Infrastructure.

Here is an illustration using a deployment:

  1. The Terraform Provision step executes pre-deployment to build the infrastructure.
  2. The Infrastructure Definition is used to select the provisioned nodes.
  3. The app is installed on the provisioned node.

Step 1: Add Environment to Workflow

Before creating or changing the Workflow settings to use a Terraform Infrastructure Provisioner, you need an Infrastructure Definition that uses it. Setting up this Infrastructure Definition is covered in Map Terraform Infrastructure.

Next, when you create or edit your Canary Workflow, you add the Environment containing the mapped Infrastructure Definition to your Workflow settings.

Harness Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. For AMI deployments, Infrastructure Provisioners are also supported in Blue/Green deployments. If you are creating a Blue/Green Workflow for AMI, you can select the Environment and Infrastructure Definition in the Workflow setup settings.

To create the Workflow and add the Environment, do the following:

  1. In your Harness Application, click Workflows.
  2. Click Add Workflow. The Workflow settings appear.
  3. Enter a name and description for the Workflow.
  4. In Workflow Type, select Canary.
  5. In Environment, select the Environment that has the Terraform Provisioner set up in one of its Infrastructure Definitions.
    Your Workflow settings will look something like this:
  6. Click SUBMIT. The new Workflow is created.

By default, the Workflow includes a Pre-deployment Steps section. This is where you will add a step that uses your Terraform Provisioner.

Infrastructure Definitions are added in Canary Workflow Phases, in the Deployment Phases section. You will add the Infrastructure Definition that uses your Terraform Infrastructure Provisioner when you add the Canary Phases, later in this topic.

Step 2: Add Terraform Step to Pre-deployment Steps

To provision the infrastructure in your Terraform Infrastructure Provisioner, add the Terraform Provisioner Step in Pre-deployment Steps:

  1. In your Workflow, in Pre-deployment Steps, click Add Step.
  2. Select Terraform Provision. The Terraform Provision settings appear.

  3. In Name, enter a name for the step. Use a name that describes the infrastructure the step will provision.
  4. In Provisioner, select the Harness Terraform Infrastructure Provisioner you set up for provisioning your target infrastructure. This is covered in Add Terraform Scripts.
  5. In Timeout, enter how long Harness should wait for the Terraform Provisioner step to complete before failing the Workflow.
    The Inherit following configurations from Terraform Plan and Set as Terraform Plan settings are described in Perform a Terraform Dry Run.
  6. Click Next. The Input Values settings appear.

Step 3: Enter Input Values

The Input Values are the same variables from the Terraform Infrastructure Provisioner Variables section.

Select a value for each variable in Input Values. For encrypted text values, select an Encrypted Text secret from Harness Secrets Management.

For more information, see Secrets Management.
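
For reference, each Input Value corresponds to a variable block declared in the Terraform script that the Provisioner references. Here is a minimal sketch; the variable names are illustrative, not taken from your script:

variable "region" {
  type = "string"
  description = "Target region for the provisioned infrastructure"
}

variable "access_key" {
  type = "string"
  description = "Supplied in Harness as an Encrypted Text secret"
}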

Use tfvar Files

The Input Values section also includes the Use tfvar files option for using a variable definitions file instead of using the variables from the Terraform Infrastructure Provisioner.

The path to the variable definitions file is relative to the root of the Git repo specified in the Terraform Provisioner setting. For example, in the following image, the testing.tfvars file is located in the repo at terraform/ec2/testing/testing.tfvars:

If Use tfvar files is selected and Inline Values are also present, the Inline Values override the variables loaded from the tfvars file.

If you only want to use the tfvars file, make sure to delete the Inline Values.
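
For example, a variable definitions file like the testing.tfvars file above might look like this (the variable names here are illustrative):

# terraform/ec2/testing/testing.tfvars
region = "us-east-1"
instance_type = "t2.micro"
tags = {
  Team = "devops"
}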

Map and List Variable Type Support

Terraform uses map variables as a lookup table from string keys to string values, and list variables for an ordered sequence of strings indexed by integers.

Harness supports both Terraform map and list variables as Input Values.

For example, here are map and list variables from a Terraform script:

variable "map_test" {
type = "map"
default = {
"foo" = "bar"
"baz" = "quz"
}
}

variable "list_test" {
type = "list"
default = ["ami-abc123", "ami-bcd234"]
}

In Inline Values, you would enter map_test and list_test as Text values, with their defaults entered in Value:

When the Workflow is deployed, the map_test and list_test variables and values are added using the terraform plan -var option to set a variable in the Terraform configuration (see Usage from Terraform):

...
terraform plan -out=tfplan -input=false
...
-var='map_test={foo = "bar", baz = "qux"}'
-var='list_test=["ami-abc123", "ami-bcd234"]'

...

And displayed as outputs:

...
Outputs:

list_test = [
    ami-abc123,
    ami-bcd234
]
map_test = {
    baz = qux
    foo = bar
}
...
If the map or list you want to add is very large, such as over 128K, you might want to input them using the Use tfvar files setting and a values.tfvars file.
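
For example, the map_test and list_test values above could be moved into a values.tfvars file and selected with the Use tfvar files setting:

# values.tfvars
map_test = {
  foo = "bar"
  baz = "qux"
}

list_test = ["ami-abc123", "ami-bcd234"]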
You can also create an expression in an earlier Workflow step that creates a map or list and enter the expression in Input Values. So long as the expression results in the properly formatted map or list value, it will be entered using terraform plan -var. See Set Workflow Variables.

Click Next. The Backend Configuration (Remote state) section appears.

Option 1: Backend Configuration (Remote state)

The Backend Configuration (Remote state) section contains the same remote state values set up in the Backend Configuration (Remote state) section of the Terraform Infrastructure Provisioner you selected.
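
For example, if the script declares a partial backend configuration (an S3 backend is assumed here purely for illustration), the key/value pairs you enter in this section supply the missing settings, just as -backend-config options do when running terraform init manually:

# Partial (empty) backend block in the Terraform script:
terraform {
  backend "s3" {}
}

# Backend config values entered in Harness, equivalent to:
#   terraform init -backend-config="bucket=my-terraform-state" \
#     -backend-config="key=ec2/terraform.tfstate" \
#     -backend-config="region=us-east-1"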

Enter values for each backend config (remote state variable), and click Next. The Additional Settings section appears.

Option 2: Resource Targeting

In Additional Settings, you can use the Target setting to target one or more specific resources or modules in your Terraform script, just like using the terraform plan -target option. See Resource Targeting from Terraform.

For example, in the following image you can see the Terraform script has one resource and two modules and the Targets setting displays them as potential targets.

If you have multiple modules in your script and you do not select one in Targets, all modules are used.
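
On the Terraform side, selecting targets has the same effect as passing -target options to the plan. A sketch, with illustrative module and resource names:

terraform plan -out=tfplan -input=false \
  -target=module.vpc \
  -target=aws_instance.my_service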

You can also use Workflow variables as your targets. For example, you can create a Workflow variable named module and then enter the variable ${workflow.variables.module} in the Targets field. When you deploy the Workflow, you are prompted to provide a value for the variable:

See Set Workflow Variables.

Option 3: Workspaces

Harness supports Terraform workspaces. A Terraform workspace is a logical representation of one of your infrastructures, such as Dev, QA, Stage, or Production.

Workspaces are useful when testing changes before moving to a production infrastructure. To test the changes, you create separate workspaces for Dev and Production.

A workspace is really a different state file. Each workspace isolates its state from other workspaces. For more information, see When to use Multiple Workspaces from HashiCorp.

Here is an example script where a local value names two workspaces, default and production, and associates different instance counts with each:

locals {
  counts = {
    "default" = 1
    "production" = 3
  }
}

resource "aws_instance" "my_service" {
  ami = "ami-7b4d7900"
  instance_type = "t2.micro"
  count = "${lookup(local.counts, terraform.workspace, 2)}"
  tags {
    Name = "${terraform.workspace}"
  }
}

In this script, the instance count is looked up using the workspace name (terraform.workspace), and the same variable is used to set the Name tag.

Harness will pass the workspace name you provide to the terraform.workspace variable, thus determining the count. If you provide the name production, the count will be 3.
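
Conceptually, this is the same as selecting the workspace before planning with the Terraform CLI. A rough equivalent, assuming the workspace name production:

terraform workspace select production || terraform workspace new production
terraform plan -out=tfplan -input=false
# terraform.workspace now resolves to "production", so lookup(local.counts, terraform.workspace, 2) returns 3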

In the Workspace setting, you can simply select the name of the workspace to use.

You can also use a Workflow variable to enter the name in Workspace.

Later, when the Workflow is deployed, you can specify the name for the Workflow variable:

This allows you to specify a different workspace name each time the Workflow is run.

You can even set up a Harness Trigger that specifies the workspace name used by the Workflow:

This Trigger can then be run in response to different events, such as a Git push. For more information, see Passing Variables into Workflows and Pipelines from Triggers.

When rollbacks occur, Harness will roll back the Terraform state to the previous version of the same workspace.

Option 4: Select Delegate

In Delegate Selector, you can select a specific Harness Delegate to execute the Terraform Provisioning step by selecting the Selector for the Delegate.

For more information on Delegate Selectors, see Delegate Installation and Management.

You can even add a Workflow variable for the Delegate Selector and then use an expression in the Delegate Selectors field. When you deploy the Workflow, you will provide the name of the Delegate Selector.

For more information, see Add Workflow Variables and Passing Variables into Workflows and Pipelines from Triggers.

Step 4: Add Infrastructure Definition to Phases

Now that the Workflow Pre-deployment section has your Terraform Provisioner step added, you need to add the target Infrastructure Definition where the Workflow will deploy.

This is the same Infrastructure Definition where you mapped your Terraform Infrastructure Provisioner outputs, as described in Map Terraform Infrastructure.

For Canary Workflows, Infrastructure Definitions are added in Phases, in the Deployment Phases section.

For AMI deployments, Terraform Infrastructure Provisioners are also supported in Blue/Green Workflows. If you are creating a Blue/Green Workflow for AMI, you can select the Environment and Infrastructure Definition in the Workflow setup settings.
  1. In the Deployment Phases section, click Add Phase. The Workflow Phase settings appear.
  2. In Service, select the Harness Service to deploy.
  3. In Infrastructure Definition, select the target Infrastructure Definition where the Workflow will deploy. This is the same Infrastructure Definition where you mapped your Terraform Infrastructure Provisioner outputs, as described in Map Terraform Infrastructure.
    Here is an example:
  4. Click Submit. Use the same Infrastructure Definition for the remaining phases in your Canary Workflow.

Once you are done, your Workflow is ready to deploy. Let's look at an example below.

Example: Terraform Deployment

This section describes the deployment steps for a Workflow using the Terraform Provisioner step and deploying to a provisioned AMI.

This is a Canary deployment Workflow, but we are only interested in Phase 1 where the Terraform provisioning occurs, and the artifact is installed in the provisioned AMI. Phase 2 of the Canary deployment is omitted.

In the Pre-Deployment section, the Terraform Provision step is executed. When you click the step you can see the Terraform command executed in Details.

Note the DNS name of the AMI in the dns output:

You will see this name used next.

In Phase 1 of the Canary deployment, click Select Nodes to see that Harness has selected the provisioned AMI as the deployment target host. Note that it used the same DNS name as the output in the Terraform Provision step:

Lastly, expand the Deploy Service step, and then click Install. You will see that the DNS name is shown on the arrow leading to install, and that the Details section displays the internal Delegate and provisioned target host addresses.

As you can see, the artifact was copied to the provisioned host. Deployment was a success.

Notes

The following notes discuss rollback of deployments that use Terraform Infrastructure Provisioners.

Deployment Rollback

If you have successfully deployed Terraform modules and on the next deployment there is an error that initiates a rollback, Harness will roll back the provisioned infrastructure to the previous, successful version of the Terraform state.

Harness will not increment the serial in the state, but perform a hard rollback to the exact version of the state provided.

Rollback Limitations

If you deployed two modules successfully already, module1 and module2, and then attempted to deploy module3, but failed, Harness will roll back to the successful state of module1 and module2.

However, let's look at the situation where module3 succeeds and now you have module1, module2, and module3 deployed. If the next deployment fails, the rollback will only roll back to the Terraform state with module3 deployed. Module1 and module2 were not in the previous Terraform state, so the rollback excludes them.

Next Steps

Now that you're familiar with provisioning using the Terraform Provisioner step, the following topics cover features to help you extend your Harness Terraform deployments:

  • Using the Terraform Apply Command — The Terraform Apply command allows you to use a Harness Terraform Infrastructure Provisioner at any point in a Workflow.
  • Perform a Terraform Dry Run — The Terraform Provisioner step in the Workflow can be executed as a dry run, just like running the terraform plan command. The dry run will refresh the state file and generate a plan.
  • Remove Provisioned Infra with Terraform Destroy — As a post-deployment step, you can add a Terraform Destroy step to remove the provisioned infrastructure, just like running the terraform destroy command.

