Terraform Provisioner

Harness has first-class support for HashiCorp Terraform as an infrastructure provisioner. This document describes how to use the Harness Terraform Infrastructure Provisioner in your Harness Application.

Harness Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. For AMI deployments, Infrastructure Provisioners are also supported in Blue/Green deployments.
This document discusses the Terraform Infrastructure Provisioner and Terraform Provision Workflow step. For information on using the Terraform Apply command in a Workflow, see Using the Terraform Apply Command.

Overview

When creating your Harness Application, there is an assumption that you already have an infrastructure in place where you want to deploy your services. In some cases, you will want to use an Infrastructure Provisioner to define this infrastructure on the fly.

You use a Terraform Infrastructure Provisioner in the following ways:

  • Terraform Infrastructure Provisioner - Add a Harness Terraform Provisioner as a blueprint for the infrastructure where your microservices are deployed. You add the provisioner script by connecting to a Git repo where the scripts are kept. You simply need to map some of the output variables in the script to the required fields in Harness. When Harness deploys your microservice, it will build your infrastructure according to this blueprint.

    Here is an example using an Infrastructure Definition, which is a feature-flagged replacement for Service Infrastructure. Outputs are mapped to the required fields:
    Here is an example using a Service Mapping in an Infrastructure Provisioner:
  • Service Infrastructure/Infrastructure Definition - Select the Terraform Provisioner in a Service Infrastructure/Infrastructure Definition in an Environment and then use this Service Infrastructure/Infrastructure Definition in any Workflow where you want to target the provisioned infrastructure.

    Here is an example using an Infrastructure Definition, which is a feature-flagged replacement for Service Infrastructure. You can see the Infrastructure Provisioner selected and the config.tf outputs mapped:
    Here is an example using a Service Infrastructure:
  • Workflow Step - Add a Terraform Provisioner step to a Workflow to build the infrastructure according to your Terraform script, and then deploy to the provisioned infrastructure you added to its Service Infrastructure/Infrastructure Definition.

The Harness Workflow types in which Terraform commands are available are Multi-Service and Canary. For AMI deployments, Blue/Green is supported. The Terraform Provision and Terraform Rollback commands are available in the Pre-deployment section and the Terraform Destroy command is available in the Post-deployment section.

Notes

  • Harness can provision any resource that is supported by a Terraform provider or plugin.
  • You do not need to deploy artifacts via Harness Services to use Terraform provisioning in a Workflow. You can simply set up a Terraform Provisioner and use it in a Workflow to provision infrastructure without deploying any artifact. In this guide, we include artifact deployment as it is the ultimate goal of Continuous Delivery.
  • Harness Service Instances (SIs) are not consumed and no additional licensing is required when a Harness Workflow uses Terraform to provision resources. When Harness deploys artifacts via Harness Services to the provisioned infrastructure in the same Workflow or Pipeline, SIs licensing is consumed.

Permissions

The permissions required for Harness to use your provisioner and successfully deploy to the provisioned instances depend on the deployment platform you use. The permissions are discussed in this topic in the configuration steps where they are applied, but, as a summary, you will need to manage the following permissions:

  • Delegate - The Harness Delegate will require permissions according to the deployment platform. It will use the access, secret, and SSH keys you configure in Harness Secrets Management to perform deployment operations. For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. For more information, see Trust Relationships and Roles.
  • Cloud Provider - The Cloud Provider must have access permissions for the resources you are planning to create in the Terraform script. For some Harness Cloud Providers, you can use the installed Delegate and have the Cloud Provider assume the permissions used by the Delegate. For others, you can enter cloud platform account information.
The account used for the Cloud Provider will require platform-specific permissions for creating infrastructure. For example, to create EC2 AMIs the account requires the AmazonEC2FullAccess policy.
  • Git Repo - You will add the Git repo where the provisioner script is located to Harness as a Source Repo Provider. For more information, see Add Source Repo Providers.
  • Access and Secret Keys - These are set up in Harness Secrets Management and then used as variable values when you add a Provisioner step to a Workflow.
  • SSH Key - In order for the Delegate to copy artifacts to the provisioned instances, it will need an SSH key. You set this up in Harness Secrets Management and then reference it in the Harness Environment Service Infrastructure/Infrastructure Definition.
  • Platform Security Groups - Security groups are associated with EC2 and other cloud platform instances and provide security at the protocol and port access level. You will need to define security groups in your provisioner scripts and ensure that they allow the Delegate to connect to the provisioned instances (a minimal example follows this list).
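
To illustrate the security group requirement above, here is a minimal sketch (in the Terraform 0.11 syntax used elsewhere in this topic) of a security group a provisioner script might create so that the Delegate can reach the provisioned instances over SSH. The variable names and CIDR block are placeholders, not values Harness requires:

variable "vpc_id" {}

variable "delegate_cidr" {
  # CIDR of the subnet where the Delegate runs (placeholder)
  default = "10.0.1.0/24"
}

resource "aws_security_group" "provisioned_instances" {
  name   = "provisioned-instances-sg"
  vpc_id = "${var.vpc_id}"

  # Allow the Delegate to SSH into the provisioned instances
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.delegate_cidr}"]
  }
}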

Delegate and Cloud Provider Setup

This section describes the Harness Delegate and Cloud Provider setup for Terraform provisioning.

A Harness Delegate performs the Terraform provisioning in your Harness Workflow. You can specify that a specific Delegate perform the operation using the Delegate Tag setting in the Terraform Provision step in your Workflow. See Delegate Tag.

The Cloud Provider is used when deploying to the infrastructure provisioned by the Terraform Provision step in your Workflow. The Cloud Provider type is specified when you set up a Service Mapping when you configure a Service Infrastructure/Infrastructure Definition in a Harness Environment. See Service Mappings and Environment Setup.

There are many types of Delegates, such as Shell Script, ECS, and Kubernetes, and Cloud Providers can connect Harness to your deployment platform using other account methods. For more information, see Delegate Installation and Management and Add Cloud Providers.

Delegate and Terraform Setup

The Delegate should be installed where it can connect to the provisioned instances. Ideally, this is the same subnet, but if you are provisioning the subnet then you can put the Delegate in the same VPC and ensure that it can connect to the provisioned subnet using security groups.

To set up the Delegate, do the following:

  1. Install the Delegate on a host where it will have connectivity to your provisioned instances. To install a Delegate, follow the steps in Delegate Installation and Management. Once the Delegate is installed, it will be listed on the Harness Delegates page.
  2. If needed, add a Delegate Tag. Some Delegates, like a Kubernetes Cluster Delegate, do not need a Tag and can be referenced in a Kubernetes Cluster Cloud Provider by name. ECS and Shell Script Delegates require a Delegate Tag. For steps on adding a Delegate Tag, see Delegate Tags.
  3. Install Terraform on the Delegate. Terraform must be installed on the Delegate to use a Harness Terraform Provisioner. You can install Terraform manually or use a Delegate Profile. To use a Delegate Profile to install Terraform, do the following:
    1. In Harness, click Setup and then Harness Delegates.
    2. Click Manage Delegate Profiles, and then click Add Delegate Profile. The Manage Delegate Profile dialog appears.
    3. In Name, enter a name for the profile, such as Terraform Install and Setup.
    4. In Startup Script, enter the Terraform installation script you want to run when the profile is applied. Your script might vary depending on the instance type hosting the Delegate. For example:
    curl -O -L https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip
    unzip terraform_0.11.13_linux_amd64.zip
    sudo mv terraform /usr/local/bin/
    terraform --version
    5. When you are finished, click SUBMIT.
    6. Locate your Delegate listing, and next to Profile, click the dropdown menu and select the Profile you created. In a few minutes, a timestamp link appears. Click the link to see the log.

Terraform is now installed on the Delegate.

If you will be using a Cloud Provider that uses Delegate Tags to identify Delegates, add a Tag to this Delegate. For more information, see Delegate Tags. When you are done, the Delegate listing will look something like this:

The Delegate needs to be able to obtain the Terraform provider you specify in the modules in your Terraform script. For example, provider "acme". On the Delegate, Terraform will download and initialize any providers that are not already initialized.
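
For example, a script that targets AWS might declare its provider as follows (the region variable is illustrative). Terraform on the Delegate downloads and initializes this provider the first time the script is run:

variable "region" {
  default = "us-east-1"
}

provider "aws" {
  region = "${var.region}"
}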

Cloud Provider Setup

Harness uses Cloud Providers to connect to the different platforms where you provision infrastructure, such as a Kubernetes Cluster and AWS. When you create the Cloud Provider, you can enter the platform account information for the Cloud Provider to use as credentials, or you can use a Delegate running in the infrastructure to provide the credentials for the Cloud Provider.

If you are building an infrastructure on a platform that requires specific permissions, such as AWS AMIs, the account used by the Cloud Provider needs the required policies. For example, to create AWS EC2 AMIs, the account needs the AmazonEC2FullAccess policy. See the list of policies in Add Cloud Providers. For steps on adding an AWS Cloud Provider, see Amazon Web Services (AWS) Cloud.

When the Cloud Provider uses the installed Delegate for credentials (via its Delegate Tag), it assumes the permissions/roles used by the Delegate. For example, here is an AWS Cloud Provider assuming the IAM role from the Delegate installed in AWS.

Git Repo Setup

To use your Terraform script in Harness you host the script in a Git repo and add a Harness Source Repo Provider that connects Harness to the repo. For steps on adding the Source Repo Provider, see Add Source Repo Providers.

Here is an example of a Source Repo Provider and the GitHub repo it is using:

In the image above, there is no branch added in the Source Repo Provider Branch Name field because this is the master branch, and the ec2 folder in the repo is not entered in the Source Repo Provider. Later, when you use the Source Repo Provider in your Terraform Provisioner, you can specify the branch and root directory:

If you are using a private Git repo, an SSH key for the private repo is required on the Harness Delegate running Terraform to download the root module. You can copy the SSH key over to the Delegate. For more information, see Using SSH Keys for Cloning Modules (from HashiCorp) and Adding a new SSH key to your GitHub account (from Github).

Artifact Server Setup

There are no Artifact Server settings specific to setting up a Harness Infrastructure Provisioner. When you deploy, Harness performs an Artifact Check of the artifact associated with a Service after Harness provisions the infrastructure, and then later deploys the artifact to the provisioned infrastructure.

You will need to set up an Artifact Server to add an artifact to your Harness Service. For steps on setting up an Artifact Server, see Add Artifact Servers.

Application and Service Setup

Infrastructure Provisioners are added to a Harness Application and then incorporated into the Environment and Workflow components.

Any Harness Service can be deployed using the Infrastructure Provisioner. For the example in this article, we use an SSH type Service with a TAR File artifact type:

In the Service, we add a TAR file of Apache Tomcat from an AWS S3 bucket.

In this example, a Cloud Provider is used to associate the artifact instead of an Artifact Server because we are using AWS S3, and AWS is always added to Harness as a Cloud Provider. If you want to use the same Cloud Provider for the Infrastructure Provisioner and artifact collection, ensure that the Delegate can connect to the artifact repo.

Terraform Provisioner Setup

This section will walk you through a detailed setup of a Terraform Provisioner for a deployment to an AWS EC2 VPC, and provide examples of the other supported platforms.

Harness supports first class Terraform provisioning for AWS-based infrastructures (SSH, ASG, ECS, Lambda) and Google Kubernetes (GKE).

For all of the supported platforms, setting up the Terraform Provisioner involves the following steps:

  1. Add your Terraform script via its Git Repo so Harness can pull the script.
  2. Map the relevant Terraform output variables from the script to the required Harness fields for the deployment platform (AWS, Kubernetes, etc).

To set up a Terraform Infrastructure Provisioner, do the following:

  1. In your Harness Application, click Infrastructure Provisioners.
  2. Click Add Infrastructure Provisioner, and then click Terraform. The Add Terraform Provisioner dialog appears.
  3. In Display Name, enter the name for this provisioner. You will use this name to select this provisioner in Harness Environments and Workflows.
  4. Click NEXT. The Script Repository section appears. This is where you provide the location of your Terraform script in your Git repo.
  5. In Script Repository, in Git Repository, select the Source Repo Provider you added for the Git repo where your script is located.
  6. In Git Repository Branch, enter the repo branch to use. For example, master. For master, you can also use a dot (.).
  7. In Terraform Configuration Root Directory, enter the folder where the script is located. Here is an example showing the Git repo on GitHub and the Script Repository settings:
  8. Click NEXT. The Variables section is displayed. This is where you will add the script input variables that must be given values when the script is run.
  9. In Variables, click Populate Variables. The Populate from Example dialog appears. Click SUBMIT to have the Harness Delegate use the Source Repo Provider you added to pull the variables from your script and populate the Variables section.
    If Harness cannot pull the variables from your script, check your settings and try again. Ensure that your Source Repo Provider is working by clicking its TEST button.

    Once Harness pulls in the variables from the script, it populates the Variables section.
    In the Type column for each variable, specify Text or Encrypted Text. When you add the provisioner to a Workflow, you will have to provide text values for Text variables, and select Harness Encrypted Text variables for Encrypted Text variables. This will be described later in this article, but you can read about Encrypted Text variables in Secrets Management.
  10. Click NEXT. The Backend Configuration (Remote state) section appears. This is an optional step.

    By default, Terraform uses the local backend to manage state, in a local file named terraform.tfstate on the disk where you are running Terraform. With remote state, Terraform writes the state data to a persistent remote data store (such as an S3 bucket or HashiCorp Consul), which can then be shared between all members of a team. You can add the backend configs (remote state variables) for remote state to your Terraform Provisioner in Backend Configuration (Remote state).
  11. In Backend Configuration (Remote state), enter the backend configs from your script. (A sample backend block is shown after these steps.)
  12. Click NEXT. The Terraform Provisioner dialog will look something like this:
  13. Click SUBMIT. The Terraform Provisioner is created.
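
As a sketch of what step 11 refers to, a script using an S3 remote backend might contain a block like the following (the bucket, key, and region are placeholders). Terraform allows partial backend configuration, so values can be omitted from this block and supplied as the backend configs you enter in Harness:

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "ec2/terraform.tfstate"
    region = "us-east-1"
  }
}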

Next you will create Service Mappings to map specific script settings to the fields Harness requires for provisioning.

Script Repository Expression Support

When you specify the Script Repository for the Terraform Provisioner, you select the Source Repo Provider for the Git repo where your script is located, the Git repo branch to use, and root directory for the Terraform configuration file (config.tf).

You can also use expressions in the Git Repository Branch and Terraform Configuration Root Directory and have them replaced by Workflow variable values when the Terraform Provisioner is used by the Workflow. For example, a Workflow can have variables for branch and path:

In Script Repository, you can enter variables as ${workflow.variables.branch} and ${workflow.variables.path}:

You cannot use variables in the Script Repository fields to populate the Variables section. To populate the Variables section, use actual values and then click Populate Variables.

When the Workflow is deployed, you are prompted to provide values for the Workflow variables, which are then applied to the Script Repository settings:

This allows the same Terraform Provisioner to be used by multiple Workflows, where each Workflow can use a different branch and path for the Script Repository.

Service Mappings

For Terraform provisioning, Harness supports first class Service Mappings for AWS-based infrastructures (SSH, ASG, ECS, Lambda) and Google Kubernetes (GKE).

Service Mappings are being replaced by Infrastructure Definitions. With Service Mappings, you map Terraform script outputs as part of the Harness Infrastructure Provisioner. With Infrastructure Definitions, you map Terraform script outputs as part of the Infrastructure Definition in a Harness Environment.

If you have been running your deployments manually, you might not have outputs configured in your template files. To configure Service Mappings, you will need to add these output variables to your template.

To create a Service Mapping, do the following:

  1. In your Terraform Provisioner, in Service Mappings, click Add Service Mapping. The Service Mapping dialog appears.
    The mapping required by Harness depends on the type of Harness Service you are deploying and the Cloud Provider, and hence deployment platform, you are using for deployment. In the following steps, we will map a TAR type Service with an AWS Cloud Provider.
  2. In Service, enter the Harness Service you are deploying using the provisioner.
  3. In Deployment Type, select the type of deployment, such as SSH, ECS, or Kubernetes.
  4. In Cloud Provider Type, select the type of Cloud Provider you set up to connect to your deployment environment.
  5. Click NEXT.
  6. In Configuration, map the required fields to your Terraform script outputs. The following sections provide examples for the common deployment types. When you are finished, click NEXT and then SUBMIT. The Service Mapping is added to the Terraform Provisioner and you can move on to using the provisioner in an Environment.

You map the Terraform script outputs using this syntax, where exact_name is the name of the output:

${terraform.exact_name}

When you map a Terraform script output to a Harness field as part of a Service Mapping, the variable for the output, ${terraform.exact_name}, can be used anywhere in the Workflow that uses that Terraform Provisioner.

SSH

The Secure Shell (SSH) deployment type requires the Region and Tags fields. The following example shows the Terraform script outputs used for the mandatory SSH deployment type fields:
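
For example, assuming your script creates an EC2 instance named aws_instance.tf_instance, outputs like the following (the names are illustrative) could supply these fields, which you would then map as ${terraform.region} and ${terraform.tags}:

output "region" {
  value = "us-east-1"
}

output "tags" {
  value = "${aws_instance.tf_instance.tags}"
}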

ECS

The ECS deployment type requires the Region and Cluster fields. The following example shows the Terraform script outputs used for the mandatory ECS deployment type fields:

For information on ECS deployments, see AWS Elastic Container Service (ECS) Deployment.

Kubernetes

The Kubernetes deployment type requires the Cluster Name and Namespace fields.

The Kubernetes deployment type is supported for Google Cloud Platform and Azure Cloud Providers, but not for Kubernetes Cluster Cloud Providers.

The following example shows the Terraform script outputs used for the mandatory Kubernetes deployment type fields:
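
For example, assuming your script creates a GKE cluster named google_container_cluster.primary, outputs like the following (the names are illustrative) could supply these fields, mapped as ${terraform.cluster_name} and ${terraform.namespace}:

output "cluster_name" {
  value = "${google_container_cluster.primary.name}"
}

output "namespace" {
  value = "default"
}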

For information on Kubernetes deployments, see Kubernetes Deployments Overview.

AMI and AutoScaling Group
AMI deployments are the only type that supports Terraform and CloudFormation Infrastructure Provisioners in Blue/Green deployments.

The AWS AutoScaling Group deployment type requires the Region and Base Auto Scaling Group fields. The following example shows the Terraform script outputs used for all of the fields:

For detailed information on AMI deployments, see AMI Basic Deployment. Here is what each of the output values are:

  • Region - The target AWS region for the AMI deployment.
  • Base Auto Scaling Group - An existing Auto Scale Group that Harness will copy to create a new Auto Scaling Group for deployment by an AMI Workflow. The new Auto Scaling Group deployed by the AMI Workflow will have unique max and min instances and desired count.
  • Target Groups - The target group for the load balancer that will support your Auto Scale Group. The target group is used to route requests to the Auto Scale Groups you deploy. If you do not select a target group, your deployment will not fail, but there will be no way to reach the Auto Scale Group.
  • Classic Load Balancers - A classic load balancer for the Auto Scale Group you will deploy.
  • For Blue/Green Deployments only:
    • Stage Classic Load Balancers - A classic load balancer for the stage Auto Scale Group you will deploy.
    • Stage Target Groups - The staging target group to use for Blue Green deployments. The staging target group is used for initial deployment of the Auto Scale Group and, once successful, the Auto Scale Group is registered with the production target group (Target Groups selected above).

Lambda

The Lambda deployment type requires the IAM Role and Region fields. The following example shows the Terraform script outputs used for the mandatory and optional Lambda deployment type fields:

Environment Setup

A Harness Environment represents your production and non-production deployment environments and contains Service Infrastructure/Infrastructure Definition settings to specify a deployment infrastructure, using a Cloud Provider, a deployment type (ECS, Kubernetes etc), and the specific infrastructure details for the deployment, like VPC settings.

Typically, when you add an Environment, you specify the Service Infrastructure/Infrastructure Definition for an existing infrastructure. To use your Terraform Provisioner, you add the Terraform Provisioner to the Service Infrastructure/Infrastructure Definition to identify a dynamically provisioned infrastructure that will exist.

The following image shows two Service Infrastructures for an Environment. One is for an already-provisioned infrastructure, and one uses a dynamically provisioned infrastructure.

Here is an Infrastructure Definition example:

Later, when you create a Workflow, you will use a Terraform Provisioner step to provision the infrastructure. During deployment, the Terraform Provisioner step will provision the infrastructure and then the Workflow will deploy to it via the Environment Service Infrastructure/Infrastructure Definition.

To use a Terraform Provisioner in an Environment Service Infrastructure/Infrastructure Definition, do the following:

  1. In your Harness Application, click Environments. The Environments page appears.
  2. Click Add Environment. The Environment dialog appears.
  3. In the Environment dialog, enter a Name and select an Environment Type, such as Non-Production. When you are done, it will look something like this:
  4. Click SUBMIT. The new Environment appears. Next, you will add a Service Infrastructure/Infrastructure Definition using the Terraform Infrastructure Provisioner.

Infrastructure Definition

Infrastructure Definitions are a feature-flagged replacement for Service Infrastructure.

Infrastructure Definitions are replacing Service Mappings. With Service Mappings, you map Terraform script outputs as part of the Harness Infrastructure Provisioner. With Infrastructure Definitions, you map Terraform script outputs as part of the Infrastructure Definition in a Harness Environment.

To add the Infrastructure Provisioner to the Infrastructure Definition, do the following:

  1. Click Infrastructure Definition. The Infrastructure Definition dialog appears.
  2. In Name, enter the name for the Infrastructure Definition. You will use this name to select the Infrastructure Definition when you set up Workflows and Workflow Phases.
  3. In Cloud Provider Type, select the type of Cloud Provider to use to connect to the target platform, such as Amazon Web Services, Kubernetes Cluster, etc.
  4. In Deployment Type, select the same type of deployment as the Services you plan to deploy to this infrastructure. It is Deployment Type that determines which Services can be scoped in Scope to specific Services and in Workflow and Phase setup.
  5. Click Map Dynamically Provisioned Infrastructure.
  6. In Provisioner, select your Terraform Infrastructure Provisioner.
  7. In Host Name Convention (if listed), enter a naming convention or use the default Harness expression.
  8. In the remaining settings, map the required fields to your Terraform script outputs.
    You map the Terraform script outputs using this syntax, where exact_name is the name of the output:
    ${terraform.exact_name}
    When you map a Terraform script output to a Harness field as part of an Infrastructure Definition, the variable for the output, ${terraform.exact_name}, can be used anywhere in the Workflow that uses that Terraform Provisioner.
    SSH
    The Secure Shell (SSH) deployment type requires the Region and Tags fields. The following example shows the Terraform script outputs used for the mandatory SSH deployment type fields:
    ECS
    The ECS deployment type requires the Region and Cluster fields. The following example shows the Terraform script outputs used for the mandatory ECS deployment type fields:
    For information on ECS deployments, see AWS Elastic Container Service (ECS) Deployment.
    Kubernetes
    The Kubernetes deployment type requires the Cluster Name and Namespace fields.
    The Kubernetes deployment type is supported for Google Cloud Platform and Azure Cloud Providers, but not for Kubernetes Cluster Cloud Providers.
    The following example shows the Terraform script outputs used for the mandatory Kubernetes deployment type fields:
    For information on Kubernetes deployments, see Kubernetes Deployments Overview.
    AMI and AutoScaling Group
    AMI deployments are the only type that supports Terraform and CloudFormation Infrastructure Provisioners in Blue/Green deployments.
    The AWS AutoScaling Group deployment type requires the Region and Base Auto Scaling Group fields. The following example shows the Terraform script outputs used for all of the fields:
    For detailed information on AMI deployments, see AMI Basic Deployment. Here is what each of the output values are:
    • Region - The target AWS region for the AMI deployment.
    • Base Auto Scaling Group - An existing Auto Scale Group that Harness will copy to create a new Auto Scaling Group for deployment by an AMI Workflow. The new Auto Scaling Group deployed by the AMI Workflow will have unique max and min instances and desired count.
    • Target Groups - The target group for the load balancer that will support your Auto Scale Group. The target group is used to route requests to the Auto Scale Groups you deploy. If you do not select a target group, your deployment will not fail, but there will be no way to reach the Auto Scale Group.
    • Classic Load Balancers - A classic load balancer for the Auto Scale Group you will deploy.
    • For Blue/Green Deployments only:
      • Stage Classic Load Balancers - A classic load balancer for the stage Auto Scale Group you will deploy.
      • Stage Target Groups - The staging target group to use for Blue Green deployments. The staging target group is used for initial deployment of the Auto Scale Group and, once successful, the Auto Scale Group is registered with the production target group (Target Groups selected above).
    Lambda
    The Lambda deployment type requires the IAM Role and Region fields. The following example shows the Terraform script outputs used for the mandatory and optional Lambda deployment type fields:

Service Infrastructure

Infrastructure Definitions are a feature-flagged replacement for Service Infrastructures. Service Infrastructures will be removed in the future.

To add the Infrastructure Provisioner to the Service Infrastructure, do the following:

  1. Click Add Service Infrastructure. The Service Infrastructure dialog appears.
  2. For the Select Cloud Provider section, enter the same Service and Cloud Provider you are using in the Terraform Infrastructure Provisioner.
  3. Click Next. The Configuration settings appear.
  4. In Provision Type, select Dynamically Provisioned.
  5. In Provisioner, select your Terraform Infrastructure Provisioner. If you do not see it listed, then the Select Cloud Provider section is not using the same Service or Cloud Provider as your Terraform Infrastructure Provisioner.
  6. In Connection Attributes, select the SSH credentials to use when connecting to the provisioned instance.

    For example, if your Terraform Provisioner will create an AMI, you need to have a key_name argument in your script and an SSH secret in Harness using the same pem file associated with the key_name value. The following image shows how the SSH secret in Harness Secrets Management is used in Connection Attributes and how the Terraform script includes the key_name argument that uses the same key as the pem file.
    For information on creating SSH credentials, see Secrets Management.
  7. In Host Name Convention, enter a naming convention or use the default Harness expression.
  8. Select Use Public DNS for connection if you want the Harness Delegate to use DNS to resolve the hostname. This is a common scenario. If you use DNS, ensure that your instances will have a public IP registered. For example, a Terraform script can include the associate_public_ip_address argument set to true; a minimal snippet is shown after these steps.
    In AWS, you can control whether your instance receives a public IP address in a number of ways. For more information, see Public IPv4 Addresses and External DNS Hostnames from AWS.

    When you are done, the dialog will look something like the following Service Infrastructure for an SSH deployment:
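
As referenced in step 8, here is a minimal sketch of an instance resource that sets associate_public_ip_address (the AMI ID and subnet variable are placeholders):

variable "subnet_id" {}

resource "aws_instance" "tf_instance" {
  ami                         = "ami-0080e4c5bc078760e"
  instance_type               = "t2.micro"
  subnet_id                   = "${var.subnet_id}"
  associate_public_ip_address = "true"
}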

Now that you have set up a Service Infrastructure, you can use it in a Workflow along with a Terraform Provisioner step that provisions the infrastructure.

Workflow Setup

Once you have a Terraform Provisioner and an Environment Service Infrastructure/Infrastructure Definition that uses it, you can use the Terraform Provisioner in a Workflow's pre-deployment steps. The Terraform Provisioner step will provision the infrastructure and then the Workflow will deploy to it via the Environment Service Infrastructure/Infrastructure Definition.

The following image shows how the Workflow uses the Terraform Provisioner step to create the infrastructure, then selects the node(s) in the infrastructure for deployment, and finally deploys to AWS EC2.

Terraform Options

Before creating the Workflow, here are a few features to be aware of:

  • Terraform Dry Run - The Terraform Provisioner step in the Workflow can be executed as a dry run, just like running the terraform plan command. The dry run will refresh the state file and generate a plan.
  • Terraform Target Support - You can target one or more specific modules in your Terraform script, just like using the terraform plan -target command. See Resource Targeting from Terraform. You can also use Workflow variables as your targets.
  • Using tfvars Files - You can use the input variables from the Terraform script in your Terraform Provisioner or use a tfvars file in the same repository, just like using the terraform apply -var-file command. See Variable Definitions (.tfvars) Files from Terraform.
  • Terraform Destroy - As a post-deployment step, you can add a Terraform Destroy step to remove the provisioned infrastructure, just like running the terraform destroy command. See destroy from Terraform.

All of these features are discussed in detail below.

Workflow Creation

To use a Terraform Provisioner in your Workflow, do the following:

  1. In your Harness Application, click Workflows.
  2. Click Add Workflow. The Workflow dialog appears.
  3. Enter a name and description for the Workflow.
  4. In Workflow Type, select Canary.
Harness Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. For AMI deployments, Infrastructure Provisioners are also supported in Blue/Green deployments.
  5. In Environment, select the Environment that has the Terraform Provisioner set up in one of its Service Infrastructures/Infrastructure Definitions.
  6. Click SUBMIT. The new Workflow is created.

By default, the Workflow includes a Pre-deployment Steps section. This is where you will add a step that uses your Terraform Provisioner.

Terraform Provisioner Step

To add the Terraform Provisioner Step, do the following:

  1. In your Workflow, in Pre-deployment Steps, click Add Step. A dialog containing the available steps appears.

  2. In Provisioners, click Terraform Provision. The Terraform Provision dialog appears.

  3. In Provisioner, select a Harness Terraform Provisioner.
  4. In Timeout, enter how long Harness should wait to apply the Terraform Provisioner before failing the Workflow.
The Inherit following configurations from dry run setting is described in Terraform Dry Run.
  5. Click NEXT. The Input Values settings appear.

Input Values

The Input Values are the same variables from the Terraform Provisioner Variables section.

Select a value for each variable in Input Values. For encrypted text values, select an Encrypted Text secret from Harness Secrets Management.

For more information, see Secrets Management.

The Input Values section also includes the Use tfvar files option for using a variable definitions file instead of using the variables from the Terraform Provisioner. The path to the variable definitions file is relative to the root of the Git repo specified in the Terraform Provisioner setting. For example, in the following image, the testing.tfvars file is located in the repo at terraform/ec2/testing/testing.tfvars:
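
For illustration, a tfvars file such as testing.tfvars simply assigns values to the script's input variables. The variable names below are placeholders:

region        = "us-east-1"
instance_type = "t2.micro"
tag           = "testing"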

If Use tfvar files is selected and there are also Inline Values, Harness loads the variables from the tfvars file and then the Inline Values variables override them.

Click NEXT. The Backend Configuration (Remote state) section appears.

Backend Configuration (Remote state)

The Backend Configuration (Remote state) section contains the same remote state values set up in the Backend Configuration (Remote state) section of the Terraform Provisioner you selected. See the Terraform Provisioner Setup section.

Enter values for each backend config (remote state variable), and click NEXT. The Targets section appears.

Targets

In Additional Settings, you can use the Target setting to target one or more specific modules in your Terraform script, just like using the terraform plan -target command. See Resource Targeting from Terraform.

For example, in the following image you can see the Terraform script has one resource and two modules and the Targets setting displays them as potential targets.
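
As a sketch of that layout (the module names and source paths are placeholders), the script might look like the following, and the corresponding target values would be module.app_module_1 and module.app_module_2:

resource "aws_instance" "tf_instance" {
  ami           = "ami-0080e4c5bc078760e"
  instance_type = "t2.micro"
}

module "app_module_1" {
  source = "./modules/app"
}

module "app_module_2" {
  source = "./modules/app"
}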

If you have multiple modules in your script and you do not select one in Targets, all modules are used.

You can also use Workflow variables as your targets. For example, you can create a Workflow variable named module and then enter the variable ${workflow.variables.module} in the Targets field. When you deploy the Workflow, you are prompted to provide a value for the variable:

Workspace

Harness supports Terraform workspaces. A Terraform workspace is a logical representation of one of your infrastructures, such as Dev, QA, Stage, or Production.

Workspaces are useful when testing changes before moving to a production infrastructure. To test the changes, you create separate workspaces for Dev and Production.

A workspace is really a different state file. Each workspace isolates its state from other workspaces. For more information, see When to use Multiple Workspaces from HashiCorp.

Here is an example script where a local value names two workspaces, default and production, and associates different instance counts with each:

locals {
  counts = {
    "default"    = 1
    "production" = 3
  }
}

resource "aws_instance" "my_service" {
  ami           = "ami-7b4d7900"
  instance_type = "t2.micro"
  count         = "${lookup(local.counts, terraform.workspace, 2)}"
  tags {
    Name = "${terraform.workspace}"
  }
}

In the interpolation sequences, you can see that the count is assigned by looking up the workspace name (terraform.workspace) in the counts map, and that the Name tag is set using the same variable.

Harness will pass the workspace name you provide to the terraform.workspace variable, thus determining the count. If you provide the name production, the count will be 3.

In the Workspace field, you can simply enter the name of the workspace to use.

Or you can use a Workflow variable to enter the name in Workspace.

Later, when the Workflow is deployed, you can specify the name for the Workflow variable:

This allows you to specify a different workspace name each time the Workflow is run.

You can even set a Harness Trigger where you can set the workspace name used by the Workflow:

This Trigger can then be run in response to different events, such as a Git push. For more information, see Passing Variables into Workflows and Pipelines from Triggers.

When rollbacks occur, Harness will roll back the Terraform state to the previous version of the same workspace.

Delegate Tag

You can select a specific Harness Delegate to execute the Terraform Provisioning step by selecting the Tag for the Delegate in Delegate Tag. For more information on Delegate Tags, see Delegate Tags.

Click NEXT. The Terraform Provision step is added to the Workflow.

You can even add a Workflow variable for the Delegate Tag and then use an expression in the Delegate Tag field. When you deploy the Workflow, you will provide the name of the Delegate Tag:

For more information, see Add Workflow Variables and Passing Variables into Workflows and Pipelines from Triggers.

Terraform Dry Run

The Terraform Provision step in the Workflow can be executed as a dry run, just like running the terraform plan command. The dry run will refresh the state file and generate a plan but it is not applied. You can then set up an Approval step to follow the dry run, followed by the Terraform Provision step to apply the plan.

To set up Terraform dry run in your Workflow, do the following:

  1. Add your Terraform Provision step, but select the Set as dry run option.

  2. To add the Approval step, click Add Step, and select Approval.

  3. In the Approval step, select whatever approval options you want, and then click SUBMIT.
  4. Next, click Add Step and select Terraform Provision.
  5. In Terraform Provision, select your Terraform Provisioner, and then select Inherit following configurations from dry run. All of the remaining settings are disabled because they are inherited from the Dry Run step.
  6. Click SUBMIT.

Your Workflow now performs a dry run, asks for an approval, and then, once approved, applies the Terraform script.

You can rename the Terraform Provision_2 step to Terraform Apply or some other name that shows that it is applying the Dry Run step.

If the Approval step takes a long time to be approved, there is the possibility that a new commit occurs in the Git repo containing your Terraform script. To avoid problems, when the Workflow performs the dry run, it saves the commit ID of the script file. Later, after the approval, the Terraform Provision step will use the commit ID to ensure that it executes the script that was dry run.

Terraform Destroy

In the Post-deployment Steps of the Workflow, you can add a Terraform Destroy step to remove any provisioned infrastructure, just like running the terraform destroy command. See destroy from Terraform.

  1. To add the Terraform Destroy step, in the Post-deployment Steps of the Workflow, click Add Step.
  2. In the step list, click Terraform Destroy.

The Terraform Destroy dialog appears.

  3. Select the Terraform Provisioner and Workspace that were used to provision the infrastructure you want to destroy. For example, the Terraform Provisioner and Workspace used in the Pre-deployment Steps.
  4. In Delegate Tag, enter the Delegate Tag(s) for the Delegate that you want to execute this step. Typically, this is the same Tag used to select a Delegate in the Terraform Provision step.
  5. Click SUBMIT. The Terraform Destroy step is added to the Workflow.

Using Output Variables in Workflow Commands

The variables you use to map Terraform script outputs in an Infrastructure Definition or Service Mappings can also be used in other Workflow commands.

For example, if you use ${terraform.clusterName} to map a cluster name output to the cluster name in an Infrastructure Definition or Service Mappings, you can add a Shell Script step in your Workflow and use echo ${terraform.clusterName} to print the value.
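
For example, assuming a GKE cluster resource named google_container_cluster.primary, a clusterName output that produces a zone/name value (like the us-central1-a/harness-test value shown below) might look like this:

output "clusterName" {
  value = "${google_container_cluster.primary.zone}/${google_container_cluster.primary.name}"
}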

For example, you can see the Terraform log display the output clusterName = us-central1-a/harness-test in the following Terraform Provision step:

Next, you could add a Shell Script step that uses the Terraform output variable ${terraform.clusterName}:

In the Shell Script step in the deployment, you can see the value us-central1-a/harness-test printed:

Deployment Example

This section describes the deployment steps for a Workflow using the Terraform Provision step and deploying to a provisioned AMI.

This is a Canary deployment Workflow, but we are only interested in Phase 1 where the Terraform provisioning occurs, and the artifact is installed in the provisioned AMI. Phase 2 of the Canary deployment is omitted.

In the Pre-Deployment section, the Terraform Provision step is executed. When you click the step you can see the Terraform command executed in Details.

Note the DNS name of the AMI in the dns output:

You will see this name used next.

In Phase 1 of the Canary deployment, click Select Nodes to see that Harness has selected the provisioned AMI as the deployment target host. See that it used the same DNS name as the output in the Terraform Provision step:

Lastly, expand the Deploy Service step, and then click Install. You will see that the DNS name is shown on the arrow leading to install, and that the Details section displays the internal Delegate and provisioned target host addresses.

As you can see, the artifact was copied to the provisioned host. Deployment was a success.

Deployment Rollback

If you have successfully deployed Terraform modules and on the next deployment there is an error that initiates a rollback, Harness will roll back the provisioned infrastructure to the previous, successful version of the Terraform state. Harness will not increment the serial in the state, but perform a hard rollback to the exact version of the state provided.

Rollback Limitations

If you deployed two modules successfully already, module1 and module2, and then attempted to deploy module3, but failed, Harness will roll back to the successful state of module1 and module2.

However, let's look at the situation where module3 succeeds and now you have module1, module2, and module3 deployed. If the next deployment fails the rollback will only roll back to the Terraform state with module3 deployed. Module1 and module2 were not in the previous Terraform state, so the rollback excludes them.

Troubleshooting

This section lists some of the common exceptions that can occur and provides steps to fix the errors.

Delegate Could Not Reach the Resource

In order for the Delegate to reach the provisioned instances it must have connectivity and permission to SSH into the instance and perform deployment tasks. Depending on the platform where you deploy, this can involve different settings.

For example, the following Terraform script provisions an EC2 instance from an AMI and uses the same subnet (subnet_id) and security group (security_groups) as the Delegate, and it includes the boolean associate_public_ip_address setting so that the Delegate can resolve the instance's public DNS name.

resource "aws_instance" "tf_instance" {
subnet_id = "subnet-05788710b1b06b6b1"
security_groups = ["sg-05e7b8b9cad94b393"]
key_name = "doc-delegate1"
ami = "ami-0080e4c5bc078760e"
instance_type = "t2.micro"
associate_public_ip_address = "true"
tags {
Name = "${var.tag}"
}
}

In this case, the Harness Environment Service Infrastructure/Infrastructure Definition will also have the Use Public DNS for connection option selected.

Key Changed

The variables populated in the Infrastructure Provisioner Variables section are assigned a type.

If the type changes from Encrypted Text to Text, which can happen if you re-populate the variable, you might receive the following error on deployment:

Invalid request: The type of variable access_key has changed. 
Please correct it in the workflow step.

Simply correct the variable type in Variables.

Variable type is also used when you add a provisioner step to the Workflow.

You should ensure that the type selected in the Provisioner Variables section is preserved in the Workflow provisioner Input Values step.

Provisioner mapping region was not resolved

If you have mapped provisioner settings in Service Mappings but your provisioner configuration does not have outputs for those variables, you will receive the following error:

Invalid request: The infrastructure provisioner mapping region 
was not resolved from the provisioner outputs

Error Validating Provider Credentials

If you are using access or secret keys in your provisioner configuration and AWS is unable to validate them, you will receive the following error:

provider.aws: error validating provider credentials: 
error calling sts:GetCallerIdentity:
InvalidClientTokenId: The security token included in the request is invalid.

Ensure that your access or secret keys are valid. If you are using Harness Encrypted Text secrets to manage the keys, then update the Encrypted Text secrets with the correct values. For more information, see Secrets Management.

