Using the Shell Script Command

Updated 1 month ago by Michael Cretzman

One of the steps you can include in a Harness Workflow is a Shell Script command step.

With the Shell Script command, you can execute scripts in the shell session of the Workflow in the following ways:

  • Execute bash or PowerShell scripts on the host running a Harness Delegate. You can use Tags to identify which Harness Delegate to use.
  • Execute bash or PowerShell scripts on a remote host in the deployment Service Infrastructure/Infrastructure Definition.

When executing a script, you can also dynamically capture its output, providing runtime variables based on the script execution context, and export those variables to another step in the same workflow or to another workflow in the same pipeline.

For example, you could use the Shell Script step to capture instance IDs in the deployment environment and then pass those IDs downstream to future workflow steps or phases, or even to other workflows executed in the same pipeline.

Even if you do not publish the output variables, you can still specify which ones you want displayed in the deployment details and logs.

What Information is Available to Capture?

Any information in the particular shell session of the workflow can be set, captured and exported using one or more Shell Script steps in that workflow. In addition, you can set and capture information available using the built-in Harness variables. For more information, see Variables and Expressions in Harness.

A good example of information you can capture and export is the Harness variable ${}, which gives you the name of the target host on which this script is executed at runtime.

Capturing and exporting script output in the Shell Script step can be very powerful. For example, a Harness trigger could pass in a variable to a workflow (like a Git commitID), the Shell Step could use that value and info from its session in a complex function, and then export the output down the pipeline for further evaluation.
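As a sketch of that pattern, the script below derives a value in the shell session and exports it so it can be listed in Script Output. The COMMIT_ID value here is a hypothetical stand-in for a variable passed in by a trigger:

```shell
#!/bin/bash
# Hypothetical sketch: COMMIT_ID stands in for a value passed by a trigger.
COMMIT_ID="abc1234def5678"

# Derive a label from the commit ID (first 7 characters).
BUILD_LABEL="release-${COMMIT_ID:0:7}"

# Export it so Harness can capture it when BUILD_LABEL is listed in Script Output.
export BUILD_LABEL
echo "BUILD_LABEL=${BUILD_LABEL}"
```

Only exported variables that you list in Script Output are captured, so the export statement is what makes the value available downstream.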

Intended Audience

  • Developers
  • DevOps

Using Shell Script Step

The following procedure provides a simple demonstration of how to create a bash script in a Shell Script step in a workflow and publish its output in a variable.

When the script in the Shell Script command is run, Harness executes the script on the target host's operating system. Consequently, the behavior of the script depends on that host's system settings. For this reason, you might wish to begin your script with a shebang line that identifies the shell language, such as #!/bin/sh (shell), #!/bin/bash (bash), or #!/bin/dash (dash). For more information, see the Bash manual from the GNU project.
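For example, a minimal script that pins the interpreter to bash might begin like this:

```shell
#!/bin/bash
# The shebang above pins the interpreter to bash, so bash-specific
# features behave the same regardless of the host's default shell.
echo "Interpreter: ${BASH_VERSION}"
```

Without the shebang, the script runs under whatever default shell the target host provides, which may not support bash-specific syntax.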

To capture the shell script output in a variable, do the following:

  1. In a Harness application, open a workflow. For this example, we will use a Build workflow.
  2. In a workflow section, click Add Command. The Add Command dialog opens.
  3. In Add Command, in Others, click Shell Script. The Shell Script dialog appears.
  4. In Script Type, select BASH or POWERSHELL. In this example, we will use BASH.
  5. In Script, enter a bash or PowerShell script. In this example, we will use a script that includes an export. For example, export the variable names BUILD_NO and LANG:

    export BUILD_NO="345"
    export LANG="en-us"

    For PowerShell, you could set an environment variable using the $Env: prefix, for example $Env:BUILD_NO = "345".

    You must use quotes around the value because environment variables are Strings.
  6. In Script Output, enter the list of variables you want to use. In our example, we would enter BUILD_NO,LANG.
  7. Decide where to execute the script. If you wish to execute the script on the host running the Harness Delegate, enable Execute on Delegate. Often, you will want to execute the script on a target host instead; if so, ensure Execute on Delegate is disabled.

    If the Shell Script is executed on a target host (Execute on Delegate is disabled), then the Delegate that can reach the target host is used.

    For Kubernetes Workflows: If the Shell Script is executed on the Delegate (Execute on Delegate is enabled), then Harness checks to see if kubectl is being used. If kubectl is being used, Harness checks that the Delegate can reach the Kubernetes cluster. If kubectl is not being used, any Delegate is used for the script.
  8. In Target Host, enter the IP address or hostname of the remote host where you want to execute the script. The target host must be in the Service Infrastructure/Infrastructure Definition selected when you created the workflow, and the Harness Delegate must have network access to the target host. You can also enter the variable ${} and the script will execute on whichever target host is used during deployment.
  9. If you selected BASH in Script Type, Connection Type will contain SSH. If you selected POWERSHELL in Script Type, Connection Type will contain WINRM.
  10. In SSH Connection Attribute (or WinRM Connection Attribute), select the execution credentials to use for the shell session. For information on setting up execution credentials, see Using Execution Credentials.
  11. In Working Directory, specify the folder where the script is executed on the remote host, for example /tmp for Linux or %TEMP% for Windows.
  12. In Tags, enter the tag(s) of the Delegate you want to use. You add Tags to Delegates in order to ensure that they are used to execute the command. For more information, see Delegate Tags.
    Tagging can be used whether Execute on Delegate is enabled or not. The Shell Script command honors the Tag and executes the SSH connection to the specified target host via the tagged delegate.

    An example where tagging might be useful when Execute on Delegate is disabled is when you specify an IP address in Target Host, but you have two VPCs with the same subnet and duplicate IP addresses exist in both. Using Tags, you can scope the shell session to the Delegate in a specific VPC.

    You must enter all of the Tags that the Delegate you are targeting uses. If the Delegate has three Tags, and you only enter one in Tags, that Delegate might not be used.
  13. To export the output variable(s) you entered in Script Output earlier, enable Publish output in the context. If you do not enable this, the variables you entered in Script Output will still be displayed in the deployment details and logs for the workflow.
  14. In Variable Name, enter a unique parent name for all of the output variables. You will use this name to reference the variable elsewhere. For example, if the Variable Name is region, you would reference BUILD_NO with ${context.region.BUILD_NO} or ${region.BUILD_NO}.
  15. In Scope, select Pipeline, Workflow, or Phase. The output variables are available within the scope you set here.
    The scope you select is useful for preventing variable name conflicts. You might use a workflow with published variables in multiple pipelines, so scoping the variable to Workflow will prevent conflicts with other workflows in the pipeline.

    Here is an example of a complete Shell Script command:
  16. Click SUBMIT. The Shell Script is added to your workflow.

Next, use the output variables you defined in another command in your phase, workflow, or pipeline, as described below.

Use Published Output Variables

The following procedure demonstrates how to use the output variables you captured and published in the Shell Script command above.

Remember that where you can reference your published output variables depends on the scope you set in Scope in the Shell Script command.

To use published output variables, do the following:

  1. In your Harness workflow, add a new command. For this example, we will use a HTTP command.

    Click Add Command. In the Add Command dialog, click HTTP. The HTTP command dialog opens.
  2. In URL, enter a URL that references the Publish Variable Name region and the Script Output variable names BUILD_NO and LANG you published in the Shell Script command. For example, here is a search using the variables: ${context.region.BUILD_NO}&${context.region.LANG}

    Note the use of context is optional.
  3. Fill out the rest of the HTTP dialog and click SUBMIT.

When you deploy your workflow, you will see both the Shell Script and HTTP steps using the output variables.

In the log for the Shell Script step, you can see the output variables:

INFO   2018-10-22 14:03:36    Executing command ...
INFO   2018-10-22 14:03:37    Script output:
INFO   2018-10-22 14:03:37    BUILD_NO=345
INFO   2018-10-22 14:03:37    LANG=en-us
INFO   2018-10-22 14:03:37    Command completed with ExitCode (0)

In the log for the HTTP step, you can see that the published variables used to create the URL, ${context.region.BUILD_NO}&${context.region.LANG}, are now substituted with the output variable values to form the final URL.

Reserved Keywords

The word var is a reserved word for Output and Publish Variable names in the Shell Script step.

If you must use var, you can use single quotes and get() when referencing the published output variable.

Instead of using ${test.var}, use ${test.get('var')}.

Stopping Scripts After Failures

The Shell Script command continues through the script even if a command within the script fails. To prevent this, include instructions in your script to stop on failure. For example:

  • set -e - Exit immediately when a command fails.
  • set -o pipefail - Sets the exit code of a pipeline to that of the rightmost command to exit with a non-zero status, or to a zero status if all commands of the pipeline exit successfully.
  • set -u - Treat unset variables as an error and exit immediately.
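As a minimal sketch, a script combining these options might look like this:

```shell
#!/bin/bash
# Fail fast: exit on any error (-e), on unset variables (-u),
# and when any command in a pipeline fails (-o pipefail).
set -euo pipefail

BUILD_NO="345"
echo "BUILD_NO is ${BUILD_NO}"

# Without pipefail, 'false | cat' would exit 0 because cat succeeds;
# with pipefail, the pipeline's exit code is non-zero.
if ! (false | cat); then
  echo "pipeline failure detected"
fi
```

With these options set, the first failing command ends the script with a non-zero exit code, so Harness marks the step as failed instead of running the remaining commands.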

For more information, see this article: Writing Robust Bash Shell Scripts.
