This topic outlines how Harness queues Workflows to prevent conflicts when two or more Workflows simultaneously deploy the same Harness Service to the same Harness Infrastructure Definition.
In this topic:
- Before You Begin
- Using Concurrency Strategy to Control Queuing
- Synchronization Not Required Best Practices
- How Harness Locks Infrastructure
- Acquiring Resource Locks
- Queuing in Action
- Queuing with Infrastructure Provisioners
- Next Steps
Before You Begin
Ensure that you understand the following:
When multiple Harness Workflows deploy simultaneously to the same infrastructure, conflicts can arise. To prevent them, Harness normally places a resource lock on the infrastructure and queues the Workflows in FIFO (First In, First Out) order.
You can override this behavior, as covered below. Queuing is particularly valuable for Pipelines that execute multiple Workflows in parallel.
Harness allows a maximum of 20 Workflow executions in the queue for a given infrastructure's lock. Once the queue is full, subsequent Workflows targeting that infrastructure will fail.
This limit prevents a misconfigured Trigger or other execution mechanism from overloading your queue and blocking important deployments.
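The queue-and-limit behavior described above can be sketched as a bounded FIFO structure. This is an illustrative model only, not Harness code; the class and method names are invented for the sketch.

```python
from collections import deque
from typing import Optional

class ResourceLockQueue:
    """Illustrative sketch of a per-infrastructure FIFO lock queue.

    Hypothetical names, not the Harness API. At most MAX_QUEUE
    executions may occupy the queue; later arrivals fail outright.
    """

    MAX_QUEUE = 20  # the documented per-infrastructure limit

    def __init__(self):
        self.queue = deque()  # FIFO: first enqueued acquires the lock first

    def enqueue(self, workflow_execution: str) -> str:
        if len(self.queue) >= self.MAX_QUEUE:
            return "FAILED"  # queue full: the execution is rejected
        self.queue.append(workflow_execution)
        # The head of the queue holds the lock; the rest are blocked.
        return "ACTIVE" if len(self.queue) == 1 else "BLOCKED"

    def release(self) -> Optional[str]:
        """Deployment resolved: drop the holder, promote the next in line."""
        self.queue.popleft()
        return self.queue[0] if self.queue else None
```

Only the head of the queue ever deploys; every other execution waits in arrival order, mirroring the FIFO behavior the doc describes.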
Using Concurrency Strategy to Control Queuing
By default, Harness Workflows have their Concurrency Strategy set to Acquire lock on the targeted service infrastructure. This is the setting that enables queuing for shared infrastructure.
To exempt a Workflow from queuing, click the pencil icon to open the Concurrency Strategy dialog. Set the Concurrency Control drop-down to Synchronization not required, and click Submit.
Synchronization Not Required Best Practices
The golden rule with Synchronization Not Required is: only use this concurrency strategy when it does not matter whether multiple Workflows run concurrently.
For example, use it when a Workflow simply hits an HTTP endpoint to post a message and message order does not matter, or for any other operation that is already an encapsulated transaction.
When to use Acquire lock on the targeted service infrastructure:
- If concurrently running Workflows act on the same infrastructure. Running such Workflows concurrently can cause interference in many ways.
- Any Workflow that modifies state over time to reach a new state must use a concurrency strategy that queues Workflows rather than letting them overlap.
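The distinction above can be illustrated with the classic lost-update anomaly. The sketch below is not Harness-specific; it deterministically simulates what happens when several read-modify-write "workflows" overlap versus when they queue:

```python
def run_interleaved(n_workflows: int) -> int:
    """Simulate n workflows that each read shared state, then all write.

    Without queuing, every workflow reads the same starting value,
    so n-1 updates are lost (the lost-update anomaly).
    """
    state = 1
    reads = [state for _ in range(n_workflows)]  # all read before any writes
    for r in reads:
        state = r + 1  # each write clobbers the previous one
    return state

def run_queued(n_workflows: int) -> int:
    """Each workflow holds the lock for its entire read-modify-write."""
    state = 1
    for _ in range(n_workflows):
        state = state + 1  # reads the value the previous workflow wrote
    return state
```

An idempotent, self-contained operation (such as posting a single message) has no shared read-modify-write step, which is why it can safely skip synchronization.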
How Harness Locks Infrastructure
Here is an example of how infrastructure locking works. In this Harness Infrastructure Definition, the Kubernetes cluster's Namespace field is populated by a Workflow variable.
In the Kubernetes Workflow, we define the corresponding Workflow variable. In the Workflow Variables dialog, we've assigned the variable no default value.
As we begin deployment of this Workflow, we assign the variable the value target.
Once the Workflow deploys, the Details panel confirms that a lock has been placed on the resulting namespace: target and Harness Service combination.
If we open the details page for the Infrastructure Definition that we started with, it displays a newly created Infrastructure Mapping for the target namespace we specified.
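Conceptually, the lock identity combines the Harness Service with the resolved infrastructure, here including the namespace. The sketch below uses invented field names to show why deployments resolving to different namespaces do not queue behind each other:

```python
def lock_key(service: str, infra_definition: str, namespace: str) -> tuple:
    """Illustrative lock identity for a deployment.

    Hypothetical representation: the doc above shows the lock covering
    the Service plus the resolved namespace combination.
    """
    return (service, infra_definition, namespace)

# Same Service, same Infrastructure Definition, same resolved namespace:
# these two executions contend for one lock and queue FIFO.
a = lock_key("my-service", "gke-cluster", "target")
b = lock_key("my-service", "gke-cluster", "target")

# A different namespace value resolves to a different Infrastructure
# Mapping, so this execution holds a separate lock and does not queue.
c = lock_key("my-service", "gke-cluster", "staging")
```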
Acquiring Resource Locks
When pending Workflows are contending for shared infrastructure, Harness uses the above mechanism to place the Workflows in a resource lock queue. The first-launched Workflow gets a resource lock on this infrastructure, which it holds until its deployment is resolved. This temporarily blocks other Workflows from using the infrastructure—only one Workflow at a time can have a lock on a given infrastructure.
When contending for shared infrastructure, most Workflows will therefore display an Acquire Resource Lock step in Harness' Deployments page:
This step appears even if no queue is present, because it's specified by Harness' Acquire lock on the targeted service infrastructure default setting. There are two exceptions:
- No Acquire Resource Lock step will appear in Workflows of Build deployment type.
- No Acquire Resource Lock step will appear in Workflows that have been configured to ignore queueing.
Harness seeks to acquire a Resource Lock only once per Workflow. The lock's scope is the current Workflow. The Acquire Resource Lock step occurs in the first deployment or setup phase that follows any Pre-Deployment phase or step.
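The placement rule above can be sketched as a small function. The phase names and list representation are hypothetical; real Workflow phases are configured in the Harness UI:

```python
def insert_lock_step(phases):
    """Place a single Acquire Resource Lock step in the first deployment
    or setup phase that follows any Pre-Deployment phases.

    Illustrative sketch only: `phases` is an invented list-of-strings
    model of a Workflow, not a Harness data structure.
    """
    out = []
    inserted = False
    for phase in phases:
        # The lock is acquired once, before the first non-Pre-Deployment
        # phase; Pre-Deployment work runs without holding the lock.
        if not inserted and phase != "Pre-Deployment":
            out.append("Acquire Resource Lock")
            inserted = True
        out.append(phase)
    return out
```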
This example, using a Pivotal Cloud Foundry Blue/Green Workflow, shows the Acquire Resource Lock step's typical position in a deployment:
Queuing in Action
Let's look at a full example of how queuing works. Assume that we have two similar Harness Kubernetes Workflows, gke demo and gke demo-template-clone. Both deploy to the same infrastructure, because they share the same Infrastructure Definition.
When both Workflows are deployed simultaneously, Harness might initiate the deployment of gke demo slightly before gke demo-template-clone. This scenario applies to a Pipeline whose stages run these Workflows in parallel, but in that order.
If we examine the gke demo-template-clone Workflow's Deployments page, we might initially see the deployment paused at its Acquire Resource Lock step. The Details panel shows why: the gke demo-template-clone deployment is second in the Resource Lock Queue, so it currently has BLOCKED status.
gke demo is first in the queue. If we immediately switch to its Deployments page, we might see that this Workflow's Acquire Resource Lock step has completed: it has acquired the lock. The Details panel confirms that this first-in-queue deployment has ACTIVE status.
The gke demo Workflow can now proceed through completion. Once the gke demo Workflow has completed deployment, the queue clears. Therefore, gke demo-template-clone can now acquire the lock, and proceed to deploy.
Queuing with Infrastructure Provisioners
Workflows incorporating Infrastructure Provisioners are queued the same way as Workflows based on predefined infrastructure. Infrastructure Provisioner commands are always added in the Workflow's Pre-Deployment phase. This sets up the new infrastructure, enabling Harness to properly queue and lock deployments to that infrastructure in the following phase.