Service Types and Artifact Sources

Updated 3 days ago by Michael Cretzman

This article provides a matrix of Harness Service artifact types and their artifact sources, showing which types support metadata-only sources and which support both metadata and file sources. It also describes how to use the different source types to copy, download, and install artifacts.

Metadata and File Artifact Sources

Harness Services support both metadata and file Artifact Sources. When you add an Artifact Source to a Harness Service, you specify whether to use metadata and/or file. In many cases, only metadata is supported. File-based Artifact Sources are supported for Harness Service deployment types that use files, such as JAR, ZIP, etc.

Here is a Harness Service with a Secure Shell (SSH) deployment type:

In Artifact Type, you can see that file-based artifacts are supported.

Here is a Service with the Pivotal Cloud Foundry (PCF) deployment type. Its Artifact Sources might or might not support file-based artifacts for PCF:

For nearly all deployments, metadata is sufficient as it contains enough information for the target host(s) to obtain or build the artifact. In cases where specific files are needed, a file-based Artifact Source can be used.

When Does Harness Store the Binary?

Typically, only metadata is supported. Files are supported for some types to cover cases where CI systems such as Jenkins or Bamboo are not accessible from the production environment. In that case, Harness becomes the common layer for orchestration.

Artifact Sources and Artifact Type Matrix

The following table lists the artifact types, such as Docker Image and AMI, and whether their Artifact Sources support metadata, files, or both.

Legend:

  • M - Metadata-only. This includes Docker image and registry information. For AMI, this means AMI ID-only.
  • MF - Metadata or file.
  • Blank - Not supported.

| Sources | Docker Image (Kubernetes/Helm) | AWS AMI | AWS CodeDeploy | AWS Lambda | JAR | RPM | TAR | WAR | ZIP | PCF | IIS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Jenkins |  |  |  | MF | MF | MF | MF | MF | MF | MF | MF |
| Docker Registry | M |  |  |  |  |  |  |  |  |  |  |
| Amazon S3 |  |  | M | M | M | M | M | M | M | M | M |
| Amazon AMI |  | M |  |  |  |  |  |  |  |  |  |
| Elastic Container Registry (ECR) | M |  |  |  |  |  |  |  |  |  |  |
| Azure Reg | M |  |  |  |  |  |  |  |  |  |  |
| Azure DevOps Artifact |  |  |  | M | M | M | M | M | M | M | M |
| GCS |  |  |  |  | M | M | M | M | M | M | M |
| GCR | M |  |  |  |  |  |  |  |  |  |  |
| Artifactory | M |  |  | MF | MF | MF | MF | MF | MF | MF | MF |
| Nexus | M |  |  | MF | MF | MF | MF | MF | MF | MF | MF |
| Bamboo |  |  |  | MF | MF | MF | MF | MF | MF | MF | MF |
| SMB |  |  |  | M | M | M | M | M | M | M | M |
| SFTP |  |  |  | M | M | M | M | M | M | M | M |

Docker Image Artifacts

If the Service Type is Docker Image, at runtime the Harness Delegate uses the metadata to initiate a pull command on the deployment target host(s), pulling the artifact from the registry (Docker Registry, ECR, etc.) to the target host. To ensure the success of this artifact pull, the target host(s) must have network connectivity to the registry.

The Copy Artifact script in a Service does not apply to Docker artifacts. For Services using Docker artifacts, at runtime the Harness Delegate will execute a Docker pull on the target host to pull the artifact. Ensure the target host has network connectivity to the Docker artifact server.
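For example, the pull the Delegate executes on the target host is conceptually equivalent to the following command (the registry host, repository, and tag shown here are illustrative, not values from your Artifact Source):

```shell
# Illustrative only: the Delegate resolves the registry, image name,
# and tag from the Artifact Source metadata at runtime.
docker pull registry.example.com/my-team/my-app:1.0.4
```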

File-based Artifact Sources

When you select an Artifact Source and specify file-based only (where the Metadata-Only option is not selected), Harness copies and stores the file in the Harness Artifact data store. At runtime, the Harness Delegate copies the file to the target host(s).

Metadata-only Artifact Sources

Most artifact sources support metadata. For example, with Amazon CodeDeploy, to help manage your Amazon EC2 instances and on-prem instances, you can use tags to assign your own metadata to each resource. For SFTP, the metadata commonly includes a file path.

For an Artifact Source with the Metadata-Only option selected, Harness stores the metadata. During runtime, Harness passes the metadata to the target host(s) where it is used to obtain the artifact(s). Ensure that the target host has network connectivity to the Artifact Server.

You can add scripts to the Service to use the metadata:

  • For Amazon S3, you can add the Copy Artifact script. Typically, the script is generated by Harness automatically.
  • For Amazon S3 or Artifactory, you can add the Download Artifact script.
    The Copy and Download Artifact scripts are described later in this document.
  • For all other artifact sources, you can use the Exec script to copy or download the artifact using the metadata.

Exec Script

For all Service types, the Exec script can be added to the Service to use the artifact source metadata and copy or download the artifact.

Add an Exec Script

  1. In the Service, click Add Command. The Add Command dialog appears.
    In Command Type, select Install.
    The command is added to the Service. There is also an Install command in the Template Library that is preset with common variables.
    For information on using the Template Library, see Use Templates.
  2. Hover over the Add button to see the available scripts.
  3. Under Scripts, click Exec. The Exec dialog appears.
  4. In Working Directory, enter the working directory on the target host from which the Harness Delegate will run the Bash or PowerShell script, such as /tmp on Linux or %TEMP% on Windows. If Working Directory is left empty, the script runs in the home directory by default.
  5. Add the commands needed to install your microservice using the metadata and click SUBMIT. The Exec script is added to the Service.

To build your Exec script (for example, a cURL script), you can use the built-in Harness variables, such as ${artifact.url}, to refer to the Artifact Sources. Simply enter ${ in the Command field to see the list of variables.
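For instance, a minimal Exec script for a metadata-only source might look like the following sketch. Harness expands the ${artifact...} expressions before the script runs on the target host; ${artifact.fileName} is an assumption here, so confirm it against the variables list in your account:

```shell
#!/bin/bash
set -e

# Harness replaces ${artifact.url} with the artifact's download URL
# before this script executes on the target host.
# ${artifact.fileName} is assumed; verify it in your variables list.
curl -sSfL -o "./${artifact.fileName}" "${artifact.url}"
```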

When you create a Workflow using this Service, such as a Basic Workflow, it will include an Install step that will execute the Exec script.

For a list of artifact-related built-in variables, see Artifact in the table in Variables List.

Copy Artifact Script

The Copy Artifact command is supported for Artifact Sources that use Amazon S3. The behavior of the command differs depending on whether file-based or metadata-only is selected for the Artifact Source:

  • File-based (Metadata-only option not selected) - Harness copies and stores the file in the Harness Artifact data store. At deployment runtime, the Harness Delegate downloads the file from the Harness Artifact data store and copies it to the deployment target host. To ensure the success of this file copy, the host(s) running the Harness Delegate(s) must have network connectivity to the target deployment host(s).
  • Metadata-only - During runtime, the Harness Delegate downloads the file directly onto the target host(s) from Amazon S3. To ensure the success of this download, the host(s) running the Harness Delegate(s) must have network connectivity to Amazon S3 and the target deployment host(s).

Adding a Copy Artifact Script

To add a Copy Artifact script in a Service, do the following:

  1. Click Add Command and, in Command Type, select Install.
    The command is added to the Service.
  2. Hover over the Add button to see the Copy script, and click Copy.
    The Copy script appears.
    The $WINGS_RUNTIME_PATH is the destination path for the artifact. The variable is a constant used at runtime. For more information, see Constants.
  3. In Source, select Application Artifacts, and click SUBMIT. The Copy script is added to the Service.
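Conceptually, for an SSH Service type, the resulting copy is similar to the following sketch (the staging path, user, host, and file name are illustrative; the actual command is generated by Harness):

```shell
# Illustrative only: the Delegate copies the staged artifact file
# to the runtime path ($WINGS_RUNTIME_PATH) on the target host.
scp /tmp/staging/my-app.war ssh-user@target-host:"$WINGS_RUNTIME_PATH/"
```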

Download Artifact Script

The Download Artifact script is supported for Amazon S3, Artifactory, Azure Artifacts, and SMB and SFTP (PowerShell-only) artifact sources. Many Service types include default Download Artifact scripts, such as the IIS Website type, but the script is only supported for the artifact sources listed above. For other artifact sources, add a new command and use the Exec script to download the artifact.

For Services that use Amazon S3, Artifactory, Azure Artifacts, or SMB and SFTP (PowerShell-only) sources, at deployment the Harness Delegate will run the Download Artifact script on the deployment target host and download the artifact.

Here is the Download Artifact script dialog:

In Artifact Download Directory, a variable points to the Application Defaults variables.

You can add an Application Defaults variable for the artifact sources that will be referenced by the Download Artifact script, and then use it wherever Download Artifact is used. For more information, see Application Defaults Variables.

Copy and Download Artifact Provider Support

The following table lists the providers supported by the Copy and Download Artifact commands in a Service.

Legend:

  • Y: Yes
  • N: No
  • N/A: Not Applicable

| Provider | Repository/Package Types | Download Artifact (WinRM or SSH Service types only) | Copy Artifact (SSH Service type only) |
|---|---|---|---|
| AWS S3 | All | Y | Y |
| Artifactory | Non-Docker | Y | Y (currently behind a Feature Flag; contact Harness to enable this option) |
| Artifactory | Docker | N/A | N/A |
| SMB | IIS related | Y | N/A |
| SFTP | IIS related | Y | N/A |
| Jenkins: Metadata-only | All | N | N (to use the metadata, use the Exec script to copy the artifact to the target host) |
| Jenkins: Artifact is saved to Harness | All | N | Y |
| Docker Registry | Docker | N/A | N/A |
| AWS AMI | AMI | N/A | N/A |
| AWS ECR | Docker | N/A | N/A |
| Google Cloud Storage | Docker | N/A | N/A |
| Google Container Registry | Docker | N/A | N/A |
| Nexus 2.x/3.x: Artifact is saved to Harness | Maven 2.0 | N | Y |
| Nexus 2.x/3.x: Artifact is saved to Harness | NPM | N | Y |
| Nexus 2.x/3.x: Artifact is saved to Harness | NuGet | N | Y |
| Nexus 2.x/3.x: Artifact is saved to Harness | Docker | N/A | N/A |
| Nexus 2.x/3.x: Metadata-only | Maven 2.0 | N | N (to use the metadata, use the Exec script to copy the artifact to the target host) |
| Nexus 2.x/3.x: Metadata-only | NPM | N | N (to use the metadata, use the Exec script to copy the artifact to the target host) |
| Nexus 2.x/3.x: Metadata-only | NuGet | N | N (to use the metadata, use the Exec script to copy the artifact to the target host) |
| Nexus 2.x/3.x: Metadata-only | Docker | N/A | N/A |
| Bamboo: Metadata-only | All | N | N (to use the metadata, use the Exec script to copy the artifact to the target host) |
| Bamboo: Artifact is saved to Harness | All | N | Y |
| Azure Artifacts | Maven 2.0, NuGet | Y | Y |
| Custom Repository | All | N/A | N (use the Exec script to use the metadata to copy the artifact to the target host) |

