Custom Metrics Verification

Updated 4 months ago by Michael Cretzman

If you need to use a verification provider other than those listed in the Harness verification provider list, you can connect to a custom verification provider and use it in your deployment workflows.

You can add a Custom Metrics Provider verification step to your workflow, and Harness will use the provider to verify the performance and quality of your deployments via its machine-learning verification analysis.

First, ensure that your verification provider isn't already supported in Harness by default:

  1. Click Setup.
  2. Click Connectors.
  3. Click Verification Providers.
  4. Click Add Verification Provider, and see if your verification provider is listed.

If it is not listed, then follow the steps in this guide.

Verification Setup Overview

You set up your Custom Metrics Provider and Harness in the following way:

  1. Using your Custom Metrics Provider, you monitor your microservice or application.
  2. In Harness, you connect Harness to your Custom Metrics Provider account, adding the Custom Metrics Provider as a Harness Verification Provider.
  3. After you have run a successful deployment of your microservice or application in Harness, you then add one or more APM Verification steps to your Harness deployment workflow.
  4. Harness uses your Custom Metrics Provider to verify your future microservice/application deployments.
  5. Harness Continuous Verification uses unsupervised machine-learning to analyze your deployments and Custom Metrics Provider analytics, discovering events that might be causing your deployments to fail. Then you can use this information to improve your deployments.

New Relic Insights as a Provider

To demonstrate how to use a Custom Metrics Provider, this guide will use New Relic Insights. New Relic Insights allows you to analyze user behavior, transactions, and more using New Relic's other products.

For more information on New Relic Insights, see their documentation.

Analytics with New Relic Insights

Harness Analysis

Intended Audience

  • DevOps

Before You Begin

Connect to Custom Metrics Provider

Connect Harness to a Custom Metrics Provider (New Relic Insights in this example) to have Harness verify the success of your deployments. Harness will use your tools to verify deployments and use its machine learning features to identify sources of failures.

To connect a custom metrics provider, do the following:

  1. Click Setup.
  2. Click Connectors.
  3. Click Verification Providers.
  4. Click Add Verification Provider, and click Custom Verification.

    The Metrics Data Provider dialog appears.

In the Metrics Data Provider dialog, you configure how Harness queries your provider's event data via API.

For example, with New Relic Insights, you are configuring the Metrics Data Provider dialog to perform a cURL request like the following:

curl -H "Accept: application/json" -H "X-Query-Key: YOUR_QUERY_KEY" "https://insights-api.newrelic.com/v1/accounts/YOUR_ACCOUNT_ID/query?nrql=YOUR_QUERY_STRING"

To query event data via API in New Relic Insights, you will need to set up an API key in New Relic. For more information, see Query Insights event data via API from New Relic.
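The request above can be sketched in Python to show exactly what the connector sends. This is a minimal illustration, not Harness's implementation; the account ID, query key, and NRQL string are placeholders for your own New Relic values:

```python
from urllib.parse import quote

def build_insights_request(account_id, query_key, nrql):
    """Assemble the URL and headers for an Insights query-API request.

    account_id, query_key, and nrql are placeholders; substitute your
    own New Relic values.
    """
    base_url = f"https://insights-api.newrelic.com/v1/accounts/{account_id}"
    url = f"{base_url}/query?nrql={quote(nrql, safe='')}"
    headers = {
        "Accept": "application/json",   # the query response Content-Type
        "X-Query-Key": query_key,       # store as an Encrypted Value in Harness
    }
    return url, headers

url, headers = build_insights_request(
    "12121212", "YOUR_QUERY_KEY", "SELECT average(duration) FROM PageView"
)
```

The Base URL, Headers, and Validation Path fields described below map directly onto the pieces assembled here.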

The Metrics Data Provider dialog has the following fields.

Field

Description

Type

Select Metrics Data Provider.

Base URL

Enter the URL for API requests. For example, in New Relic Insights, you can change the default URL to get the Base URL for the API.

Default URL: https://insights.newrelic.com/accounts/12121212

Base URL for API: https://insights-api.newrelic.com/v1/accounts/12121212

Headers

Add the query headers required by your metrics data provider. For New Relic Insights, do the following:

  1. Click Add Headers.
  2. In Key, enter X-Query-Key. For New Relic, the X-Query-Key header must contain a valid query key.
  3. In Value, enter the API key you got from New Relic.
  4. Click the checkbox under Encrypted Value to encrypt the key.
  5. Click Add Headers again.
  6. In Key, enter Accept. This is for the Content-Type of a query.
  7. In Value, enter application/json. The Content-Type of a query must be application/json.

Parameters

Add any request parameters that do not change for every request.

Validation Path

Enter the query string from your metric provider. The resulting URL ({base_URL}/{validation_path}) is used to validate the connection to the metric provider. This query is invoked with the headers and parameters defined here.

For example, in New Relic Insights, you can take the query from the NRQL> field and add it to the string query?nrql=, for example:

query?nrql=SELECT%20average%28duration%29%20FROM%20PageView

The field accepts URL encoded or unencoded queries.
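The encoded form of the example query can be produced with a standard URL-encoding routine. A quick Python sketch, using the documented example query:

```python
from urllib.parse import quote, unquote

# The documented example query, unencoded:
nrql = "SELECT average(duration) FROM PageView"

# URL-encode it for the Validation Path (Harness also accepts it unencoded):
validation_path = "query?nrql=" + quote(nrql, safe="")
# → query?nrql=SELECT%20average%28duration%29%20FROM%20PageView

# Decoding recovers the original query:
assert unquote(validation_path) == "query?nrql=" + nrql
```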

Display Name

The name for this verification provider connector in Harness. This is the name you will use to reference this connection whenever you add a verification step to a workflow.

Verify with Custom Metrics

The following procedure describes how to add a custom metric verification step in a Harness workflow using New Relic Insights. For more information about workflows, see Add a Workflow.

Once you run a deployment and your custom metrics provider performs verification, Harness machine-learning verification analysis will assess the risk level of the deployment.

In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow after you have run at least one successful deployment.

To verify your deployment with a custom metric, do the following:

  1. Ensure that you have added Custom Metric Provider as a verification provider, as described above.
  2. In your workflow, under Verify Service, click Add Verification.
  3. In the Add Command popover, click APM Verification.

    The Metrics Verification State dialog appears.

    The Metrics Verification State settings are common to all verification provider dialogs in workflows. The Metric Collection fields are described later in this guide.

    Field

    Description

    Expression for Host/Container

    The expression entered here should resolve to a host/container name in your deployment environment. By default, the expression is ${host.hostName}. If you begin typing the expression into the field, the field provides expression assistance.

    Analysis Time duration

    Set the duration for the verification step. If the verification step exceeds this duration, the workflow Failure Strategy is triggered. For example, if the Failure Strategy is Ignore, then the verification state is marked Failed but the workflow execution continues.

    Baseline for Risk Analysis

    Select Previous Analysis to have this verification use the previous analysis for a baseline comparison. If your workflow is a Canary workflow type, you can select Canary Analysis to have this verification compare old versions of nodes to new versions of nodes in real-time.

    Execute with previous steps

    Check this checkbox to run this verification step in parallel with the previous steps in Verify Service.

    Failure Criteria

    Specify the sensitivity of the failure criteria. When the criteria is met, the workflow Failure Strategy is triggered.

    Include instances from previous phases

    If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment.

    Wait interval before execution

    Set how long the deployment process should wait before executing the verification step.

  4. In Metrics Data Provider, select the custom metric provider you added, described above.
  5. Add a Metric Collections section.

Metric Collections is covered in the following table.

Field

Description

Metrics Name

Enter a name for the type of error you are collecting, such as HttpErrors.

Metrics Type

Select the type of metric you want to collect:

  • Infra - Infrastructure metrics, such as CPU, memory, and HTTP errors.
  • Value - Apdex (measures user satisfaction with response time).
  • Response Time - The amount of time the application takes to return a request.
  • Throughput - The average number of units processed per time unit.
  • Error - Errors reported.

Metrics Group

Enter the name to use in the Harness UI to identify this metrics group. For example, if you entered NRHttp in this field, when the workflow is deployed, the verification output in Harness would look like this:

Metrics Collection URL

Enter a query for your verification. You can build the query in your verification provider and paste it into this field. For example, in New Relic Insights, you might have the following query:


You can paste the query into the Metrics Collection URL field:

For information on New Relic Insights NRQL, see NRQL syntax, components, functions from New Relic.

The time range for a query (SINCE clause in our example) should be less than 5 minutes to avoid overstepping the time limit for some verification providers.

Response Type

Select the format type for the response. This is typically JSON.

Response Mapping

These settings are for specifying which JSON fields in the responses to use.

Transaction Name

Fixed: Use this option when all metrics are for the same transaction. For example, a single login page.

Dynamic: Use this option when the metrics are for multiple transactions.

Transaction Name Path

This is the JSON label for identifying a transaction name. In our example New Relic Insights query, the FACET clause groups results by the attribute transactionName. You can obtain the field name that records the transactionName by using the Guide From Example feature:

  1. Click Guide From Example. The Select key from example popover appears.
    The Metrics URL Collection is based on the query you entered in the Metric Collection URL field earlier.
  2. Click Submit. The query is executed and the JSON is returned.
  3. Locate the field name that is used to identify transactions. In our New Relic Insights query, it is the facets.name field.
    In New Relic Insights, you can find the name in the JSON of your query results.
  4. Click the field name under facets. The field path is added to the Transaction Name Path field.

Metrics Value

Specify the value for the event count. This is used to filter and aggregate data returned in a SELECT statement. To find the correct label for the value, do the following:

  1. Click Guide From Example. The Select key from example popover appears.
    The Metrics URL Collection is based on the query you entered in the Metric Collection URL field earlier.
  2. Click Submit. The query is executed and the JSON is returned.
  3. Locate the field name that is used to count events. In our New Relic Insights query, it is the facets.timeSeries.results.count field.
    In New Relic Insights, you can find the name in the JSON of your query results.
  4. Click the name of the field count. The field path is added to the Metrics Value field.

Timestamp

Specify the value for the timestamp in the query. To find the correct label for the value, do the following:

  1. Click Guide From Example. The Select key from example popover appears.
    The Metrics URL Collection is based on the query you entered in the Metric Collection URL field earlier.
  2. Click Submit. The query is executed and the JSON is returned.
  3. Locate the field name that is used for the time series endTimeSeconds. In our New Relic Insights query, it is the facets.timeSeries.endTimeSeconds field.
    In New Relic Insights, you can find the name in the JSON of your query results.
  4. Click the name of the field endTimeSeconds. The field path is added to the Timestamp field.
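The three Response Mapping paths above (facets.name, facets.timeSeries.results.count, and facets.timeSeries.endTimeSeconds) can be sketched against a sample response. The structure below is a trimmed illustration of an Insights FACET ... TIMESERIES result, not a verbatim API payload:

```python
# Trimmed, illustrative shape of a FACET ... TIMESERIES query response;
# real Insights payloads contain additional fields.
sample_response = {
    "facets": [
        {
            "name": "/login",  # Transaction Name Path: facets.name
            "timeSeries": [
                {
                    "beginTimeSeconds": 1600000000,
                    "endTimeSeconds": 1600000060,  # Timestamp: facets.timeSeries.endTimeSeconds
                    "results": [{"count": 42}],    # Metrics Value: facets.timeSeries.results.count
                },
            ],
        },
    ],
}

def extract_points(response):
    """Flatten the response into (transaction, timestamp, value) tuples,
    following the three JSON paths mapped in the dialog."""
    points = []
    for facet in response["facets"]:
        for bucket in facet["timeSeries"]:
            for result in bucket["results"]:
                points.append(
                    (facet["name"], bucket["endTimeSeconds"], result["count"])
                )
    return points
```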

Timestamp Format

Enter a timestamp format. The format follows the Java SimpleDateFormat. For example, a timestamp syntax might be yyyy-MM-dd'T'HH:mm:ss.SSSX.
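For a sense of what the example pattern matches, here is a rough Python equivalent. SimpleDateFormat and strptime tokens do not map one-to-one, so this is only a sketch for checking that your timestamps fit the shape:

```python
from datetime import datetime

# Java SimpleDateFormat pattern from the example above:
java_pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSX"

# Approximate Python strptime equivalent (%f accepts the millisecond part,
# %z accepts ISO 8601 zone designators such as Z):
python_equivalent = "%Y-%m-%dT%H:%M:%S.%f%z"

parsed = datetime.strptime("2020-06-01T12:30:45.123Z", python_equivalent)
```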

Notes

  • Depending on the custom metric provider you select, you might need to provide different information to the Metric Collections section. For example, you might need to provide a hostname for the Guide From Example popover to use to retrieve data. The hostname will be the host/container/pod/node name where the artifact is deployed. If you look in the JSON for the deployment environment, the hostname is typically the name label under the host label.
  • The Compare With Previous Run option is used for Canary deployments where the second phase is compared to the first phase, and the third phase is compared to the second phase, and so on. Do not use this setting in a single phase workflow or in the first phase of a multi-phase workflow.

Verification Results

Once you have deployed your workflow (or pipeline) using the Custom verification step, you can automatically verify cloud application and infrastructure performance across your deployment.

Workflow Verification

To see the results of Harness machine-learning evaluation of your Custom verification, in your workflow or pipeline deployment you can expand the Verify Service step and then click the APM Verification step.

Continuous Verification

You can also see the evaluation in the Continuous Verification dashboard. The workflow verification view is for the DevOps user who developed the workflow. The Continuous Verification dashboard is where all future deployments are displayed for developers and others interested in deployment analysis.

To learn about the verification analysis features, see the following sections.

Deployments

Deployment info
See the verification analysis for each deployment, with information on its service, environment, pipeline, and workflows.

Verification phases and providers
See the verification phases for each verification provider. Click each provider for logs and analysis.

Verification timeline
See when each deployment and verification was performed.

Transaction Analysis

Execution details
See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took.

Risk level analysis
Get an overall risk level and view the cluster chart to see events.

Transaction-level summary
See a summary of each transaction with the query string, error values comparison, and a risk analysis summary.

Execution Analysis

Event type
Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event.

Cluster chart
View the chart to see how the selected events contrast. Click each event to see its log details.

Event Management

Event-level analysis
See the threat level for each event captured.

Tune event capture
Remove events from analysis at the service, workflow, execution, or overall level.

Event distribution
Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency.

Next Steps
