Custom Metrics and Logs Verification

Updated 1 week ago by Michael Cretzman

The following sections describe how to use Harness Custom Metrics and Logs Verification for APMs and logging tools not already supported in Harness.

If you need to use a verification provider other than those listed in the Harness verification provider list, you can connect to a custom verification provider and use it in your deployment workflows.

You can add a Custom Metrics or Logs Provider verification step to your Workflow and the provider will be used by Harness to verify the performance and quality of your deployments using Harness machine-learning verification analysis.

First, ensure that your Verification Provider isn't already supported in Harness by default:

  1. Click Setup.
  2. Click Connectors.
  3. Click Verification Providers.
  4. Click Add Verification Provider, and see if your verification provider is listed.

If it is not listed, then follow the steps in this guide.

Verification Setup Overview

You set up your Custom Metrics or Logs Provider and Harness in the following way:

  1. Using your Custom Metrics or Logs Provider, you monitor your microservice or application.
  2. In Harness, you connect Harness to your Custom Metrics or Logs Provider account, adding the Custom Metrics or Logs Provider as a Harness Verification Provider.
  3. After you have run a successful deployment of your microservice or application in Harness, you then add one or more Verification steps to your Harness deployment Workflow.
  4. Harness uses your Custom Metrics or Logs Provider to verify your future microservice/application deployments.
  5. Harness Continuous Verification uses unsupervised machine-learning to analyze your deployments and Custom Metrics or Logs Provider analytics, discovering events that might be causing your deployments to fail. Then you can use this information to improve your deployments.

New Relic Insights as a Provider

To demonstrate how to use a Custom Metrics Provider, this guide will use New Relic Insights. New Relic Insights allows you to analyze user behavior, transactions, and more using New Relic's other products.

Harness includes native support for New Relic.

For more information on New Relic Insights, see their documentation.

Analytics with New Relic Insights

Harness Analysis

Connect to Custom Provider

Connect Harness to a Custom Metrics or Logs Provider to have Harness verify the success of your deployments. Harness will use your tools to verify deployments and use its machine-learning features to identify sources of failures.

To connect a custom metrics or logs provider, do the following:

  1. Click Setup.
  2. Click Connectors.
  3. Click Verification Providers.
  4. Click Add Verification Provider, and click Custom Verification.

The Metrics/Logs Data Provider dialog appears. In Type, select Metrics Data Provider or Custom Logs Provider.

Metrics Data Provider

In the Metrics Data Provider dialog, you can configure how Harness can query event data via API.

For example, with New Relic Insights, you are configuring the Metrics Data Provider dialog to perform a cURL request like the following:

curl -H "Accept: application/json" \
-H "X-Query-Key: YOUR_QUERY_KEY" \
"https://insights-api.newrelic.com/v1/accounts/YOUR_ACCOUNT_ID/query?nrql=YOUR_QUERY_STRING"

To query event data via API in New Relic Insights, you will need to set up an API key in New Relic. For more information, see Query Insights event data via API from New Relic.

The Metrics Data Provider dialog validates the credentials and validation path you enter by confirming that your metrics provider returns an HTTP 200 response.
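To make the moving parts concrete, here is a minimal sketch (in Python, with placeholder credentials) of the request the dialog settings describe. It assumes the New Relic Insights example used throughout this guide; the account ID and query key are placeholders, not real values:

```python
# A sketch of the validation request Harness assembles from the dialog fields.
# The account ID and query key below are placeholders, not real values.
base_url = "https://insights-api.newrelic.com/v1/accounts/12121212"
headers = {
    "X-Query-Key": "YOUR_QUERY_KEY",  # stored encrypted in Harness
    "Accept": "application/json",     # Content-Type of the query response
}
validation_path = "query?nrql=SELECT%20average%28duration%29%20FROM%20PageView"

# Harness invokes {base_URL}/{validation_path} with the headers above
# and treats an HTTP 200 response as a successful validation.
full_url = f"{base_url}/{validation_path}"
print(full_url)
```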

The Metrics Data Provider dialog has the following fields.

Field

Description

Type

Select Metrics Data Provider.

Display Name

The name for this Verification Provider connector in Harness. This is the name you will use to reference this Verification Provider whenever you use it to add a verification step to a Workflow.

Base URL

Enter the URL for API requests. For example, in New Relic Insights, you can change the default URL to get the Base URL for the API.

Default URL: https://insights.newrelic.com/accounts/12121212

Base URL for API: https://insights-api.newrelic.com/v1/accounts/12121212

Headers

Add the query headers required by your metrics data provider. For New Relic Insights, do the following:

  1. Click Add Headers.
  2. In Key, enter X-Query-Key. For New Relic, an X-Query-Key header must contain a valid query key.
  3. In Value, enter the API key you got from New Relic.
  4. Click the checkbox under Encrypted Value to encrypt the key.
  5. Click Add Headers again.
  6. In Key, enter Accept. This is for the Content-Type of a query.
  7. In Value, enter application/json. The Content-Type of a query must be application/json.

Parameters

Add any request parameters that do not change for every request.

Validation Path

In Path, you will define a validation path. Enter the query string from your metric provider. The resulting URL ({base_URL}/{validation_path}) is used to validate the connection to the metric provider. This query is invoked with the headers and parameters defined here.

For example, in New Relic Insights, you can take the query from the NRQL field and append it to the string query?nrql=:

query?nrql=SELECT%20average%28duration%29%20FROM%20PageView

The field accepts URL encoded or unencoded queries.
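For illustration, the encoded and unencoded forms are equivalent. A short sketch using Python's standard library shows what the encoded string looks like (Harness accepts either form, so this is purely illustrative):

```python
from urllib.parse import quote, unquote

# An unencoded NRQL query, as you would copy it from New Relic Insights
nrql = "SELECT average(duration) FROM PageView"

# URL-encode it: spaces become %20, parentheses become %28/%29
encoded = quote(nrql)
print(encoded)

# Both forms represent the same query
assert unquote(encoded) == nrql
```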

If you select POST, the Body field appears. Enter a sample JSON body to send as the payload when making the call to the APM provider. The requirements of the JSON body will depend on your APM provider.

Custom Logs Provider

For any log providers without native support in Harness, you can specify the API calls needed to make a connection. The Custom Logs Provider settings validate the credentials and validation path you enter by confirming that your logs provider returns an HTTP 200 response.

To add a Custom Logs Provider, do the following:

  1. In Display Name, give the Verification Provider a name. You will use this name to select this provider in a Workflow.
  2. In Base URL, enter the base URL of the REST endpoint where Harness will connect. Often, the URL is the server name followed by the index name, such as http://server_name/index_name.
  3. In Parameters, click Add Parameters, and add any required parameters.
  4. In Validation Path, you will define a validation path used by Harness to validate the connection and ensure a Harness Delegate can reach the provider. Harness expects an HTTP 200 response.

When you are finished, the dialog will look something like this:

Click TEST to validate the settings and SUBMIT to add the Verification Provider.

Verify with Custom Metrics or Logs

The following procedure describes how to add a custom APM (metrics) or Logs verification step in a Harness Workflow. For more information about Workflows, see Add a Workflow.

Once you run a deployment and your custom metrics or logs provider obtains its data, Harness machine-learning verification analysis will assess the risk level of the deployment using the data from the provider.

To obtain the names of the hosts, pods, or containers where your service is deployed, add the verification step to your Workflow only after you have run at least one successful deployment.

To verify your deployment with a custom metric or log provider, do the following:

  1. Ensure that you have added Custom Metrics or Logs Provider as a verification provider, as described in Connect to Custom Provider.
  2. In your workflow, under Verify Service, click Add Verification.

  3. In the Add Command popover, click APM Verification or Log Verification. The Metrics Verification State or Custom Logs Verification dialog appears.

The dialogs are identical except for the Metric Collections Settings and Log Collection Settings sections. For the other, common settings, see Common Settings.

Metric Collections Settings

  1. In Metrics Data Provider, select the custom metric provider you added, described in Connect to Custom Provider.
  2. Add a Metric Collections section.

Metric Collections is covered in the following table.

Field

Description

Metrics Name

Enter a name for the type of error you are collecting, such as HttpErrors.

Metrics Type

Select the type of metric you want to collect:

  • Infra - Infrastructure metrics, such as CPU, memory, and HTTP errors.
  • Value - Apdex (measures user satisfaction with response time).
  • Response Time - The amount of time the application takes to return a request.
  • Throughput - The average number of units processed per time unit.
  • Error - Errors reported.

Metrics Group

Enter the name to use in the Harness UI to identify this metrics group. For example, if you entered NRHttp in this field, when the workflow is deployed, the verification output in Harness would look like this:

Metrics Collection URL

Enter a query for your verification. You can simply build the query in your Verification Provider and paste it into this field.

For information on New Relic Insights NRQL, see NRQL syntax, components, functions from New Relic.

The time range for a query (SINCE clause in our example) should be less than 5 minutes to avoid overstepping the time limit for some verification providers.

Response Type

Select the format type for the response. This is typically JSON.

Metrics Method

Select GET or POST. If you select POST, the Metric Collection Body field appears.

In Metric Collection Body, enter the JSON body to send as the payload when making a REST call to the APM provider. The requirements of the JSON body will depend on your APM provider.

You can use variables you created in the Service and Workflow in the JSON, as well as Harness built-in variables.

Response Mapping

These settings are for specifying which JSON fields in the responses to use.

Transaction Name

Fixed: Use this option when all metrics are for the same transaction. For example, a single login page.

Dynamic: Use this option when the metrics are for multiple transactions.

Transaction Name Path

This is the JSON label for identifying a transaction name. In our example New Relic Insights query, the FACET clause groups results by the attribute transactionName. You can obtain the field name that records the transactionName by using the Guide From Example feature:

  1. Click Guide From Example. The Select key from example popover appears.
    The Metrics URL Collection is based on the query you entered in the Metric Collection URL field earlier.
  2. Click Submit. The query is executed and the JSON is returned.
  3. Locate the field name that is used to identify transactions. In our New Relic Insights query, it is the facets.name field.
    In New Relic Insights, you can find the name in the JSON of your query results.
  4. Click the field name under facets. The field path is added to the Transaction Name Path field.

Metrics Value

Specify the value for the event count. This is used to filter and aggregate data returned in a SELECT statement. To find the correct label for the value, do the following:

  1. Click Guide From Example. The Select key from example popover appears.
    The Metrics URL Collection is based on the query you entered in the Metric Collection URL field earlier.
  2. Click Submit. The query is executed and the JSON is returned.
  3. Locate the field name that is used to count events. In our New Relic Insights query, it is the facets.timeSeries.results.count field.
    In New Relic Insights, you can find the name in the JSON of your query results.
  4. Click the name of the field count. The field path is added to the Metrics Value field.

Timestamp

Specify the value for the timestamp in the query. To find the correct label for the value, do the following:

  1. Click Guide From Example. The Select key from example popover appears.
    The Metrics URL Collection is based on the query you entered in the Metric Collection URL field earlier.
  2. Click Submit. The query is executed and the JSON is returned.
  3. Locate the field name that is used for the time series endTimeSeconds. In our New Relic Insights query, it is the facets.timeSeries.endTimeSeconds field.
    In New Relic Insights, you can find the name in the JSON of your query results.
  4. Click the name of the field endTimeSeconds. The field path is added to the Timestamp field.

Timestamp Format

Enter a timestamp format. The format follows the Java SimpleDateFormat. For example, a timestamp syntax might be yyyy-MM-dd'T'HH:mm:ss.SSSX. If you leave this field empty, Harness will use the default range of 1 hour previous (now-1h) to now.
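For example, the pattern yyyy-MM-dd'T'HH:mm:ss.SSSX matches a timestamp such as 2019-07-22T10:04:05.123Z (the timestamp is hypothetical). A sketch using Python's equivalent strptime directives, purely to illustrate what the pattern matches; Harness itself expects the Java SimpleDateFormat pattern:

```python
from datetime import datetime, timezone

# Hypothetical timestamp matching the Java pattern yyyy-MM-dd'T'HH:mm:ss.SSSX
ts = "2019-07-22T10:04:05.123Z"

# Python 3.7+ equivalent: %f covers the .SSS fraction, %z the X zone designator
parsed = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")
print(parsed.isoformat())
```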

When you are done, the dialog will look something like this:

Log Collection Settings

  1. In Logs Data Provider, select the custom logs provider you added, described in Connect to Custom Provider.
  2. Click Add Collection Details to add a Logs Collections section.

  • In Log Collection URL, enter the API query that will return a JSON response. When you enter a query, the Response Mapping settings appear. In this section you will map the keys in the JSON response to Harness fields to identify where data such as the hostname and timestamp are located in the JSON response.

  • In Response Type - Select JSON.
  • In Metrics Method - Select GET or POST. If you select POST, you also enable a Log Collection Body field where you can enter any HTTP body to send as part of the query.
  • In Log Message JSON Path - Use Guide from an example to query the log provider and return the JSON response.

The URL is a combination of the Verification Provider Base URL and the Log Collection URL you entered.

Click SEND. In the JSON response, click the key that includes the log message path.

The log message path key is added to Log Message JSON Path:

  • Hostname JSON Path - Use Guide from an example to query the log provider and return the JSON response. In the JSON response, click the key that includes the hostname path.

  • Regex to transform Host Name - If the JSON value returned requires transformation in order to be used, enter the regex expression here. For example: If the value in the host name JSON path of the response is pod_name:harness-test.pod.name and the actual pod name is simply harness-test.pod.name, you can write a regular expression to remove the pod_name from the response value.
  • Timestamp JSON path - Use Guide from an example to query the log provider and return the JSON response. In the JSON response, click the key that includes the timestamp.

  • Timestamp Format - Enter a timestamp format. The format follows the Java SimpleDateFormat. For example, a timestamp syntax might be yyyy-MM-dd'T'HH:mm:ss.SSSX. If you leave this field empty, Harness will use the default range of 1 hour previous (now - 1h) to now.
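The host-name transformation described in Regex to transform Host Name above can be sketched as follows (using Python's re module purely for illustration; in Harness you enter only the regular expression itself):

```python
import re

# Raw value found at the host name JSON path of the response
raw = "pod_name:harness-test.pod.name"

# Strip the leading "pod_name:" label so only the pod name remains
host = re.sub(r"^pod_name:", "", raw)
print(host)
```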

Click ADD. The Log Collection is added.

Common Settings

The following settings are common to both dialogs.

Field

Description

Expression for Host/Container

The expression entered here should resolve to a host/container name in your deployment environment. By default, the expression is ${host.hostName}. If you begin typing the expression into the field, the field provides expression assistance.

Analysis Time duration

Set the duration for the verification step. If a verification step exceeds the value, the workflow Failure Strategy is triggered. For example, if the Failure Strategy is Ignore, then the verification state is marked Failed but the workflow execution continues.

Baseline for Risk Analysis

Select Previous Analysis to have this verification use the previous analysis for a baseline comparison. If your workflow is a Canary workflow type, you can select Canary Analysis to have this verification compare old versions of nodes to new versions of nodes in real-time.

Execute with previous steps

Check this checkbox to run this verification step in parallel with the previous steps in Verify Service.

Failure Criteria

Specify the sensitivity of the failure criteria. When the criteria is met, the workflow Failure Strategy is triggered.

Include instances from previous phases

If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment.

Wait interval before execution

Set how long the deployment process should wait before executing the verification step.

Notes

  • Depending on the custom metric provider you select, you might need to provide different information to the Metric Collections section. For example, you might need to provide a hostname for the Guide From Example popover to use to retrieve data. The hostname will be the host/container/pod/node name where the artifact is deployed. If you look in the JSON for the deployment environment, the hostname is typically the name label under the host label.
  • The Compare With Previous Run option is used for Canary deployments where the second phase is compared to the first phase, and the third phase is compared to the second phase, and so on. Do not use this setting in a single phase workflow or in the first phase of a multi-phase workflow.

Datadog as a Provider

Currently, Datadog-Harness integration is for Kubernetes deployments only. To use Datadog with other deployment types, such as ECS, use the following example of how to use the Custom Metrics Provider with Datadog.

Custom Verification Provider

To add a Custom Metrics Provider using Datadog, do the following:

  1. In Harness, click Setup.
  2. Click Connectors.
  3. Click Verification Providers.
  4. Click Add Verification Provider, and click Custom Verification. The Metrics Data Provider dialog appears.
  5. In Type, select Metrics Data Provider.
  6. In Display Name, give the Verification Provider a name. You will use this name to select this provider in a Workflow.
  7. In Base URL, enter https://app.datadoghq.com/api/v1/.
  8. In Parameters, click Add Parameters, and add the following parameters.

Key

Value

Encrypted Value

api_key

Enter the API key.

Checked

application_key

Enter the application key.

Checked

If you need help obtaining the API and Application keys, see the following:

API Key

To create an API key in Datadog, do the following:

  1. In Datadog, mouseover Integrations, and then click APIs.
    The APIs page appears.
  2. In API Keys, in New API key, enter the name for the new API key, such as Harness, and then click Create API key.
  3. Copy the API key and, in Harness, paste it into the Value field.

Application Key

To create an application key in Datadog, do the following:

  1. In Datadog, mouseover Integrations, and then click APIs. The APIs page appears.
  2. In Application Keys, in New application key, enter a name for the application key, such as Harness, and click Create Application Key.
  3. Copy the application key and, in Harness, paste it into the Value field.
  9. In Validation Path, enter metrics?from=1527102292. This is an epoch-seconds value used to ensure an HTTP 200 response that validates the credentials.
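The from value is simply a Unix epoch timestamp in seconds; any past timestamp that returns data will do. A sketch of producing a recent one (illustrative only):

```python
import time

# Epoch seconds for one hour ago; any valid past timestamp works for validation
from_seconds = int(time.time()) - 3600
validation_path = f"metrics?from={from_seconds}"
print(validation_path)
```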

When you are finished, the dialog will look something like this:

Workflow Verification

Now that you have the Verification Provider set up, you can use it as a verification step in a Workflow. In the procedure below, we will provide information on how to select the Datadog metrics you need.

  1. In Harness, open the Workflow that deploys the service you are monitoring with Datadog. You add verification steps after you have performed at least one successful deployment.
  2. In the Workflow, in Verify Service, click Add Verification.
  3. Under Verifications, click APM Verification. The Metrics Verification State dialog appears. All of the settings in Metrics Verification State are Harness settings for collecting and grouping metrics except for the settings where you will map JSON response keys to Harness fields.
  4. Fill out the Metrics Verification State dialog using the following information. You will set up an API query for Harness to execute that returns a JSON response. Next, you will map the keys in the JSON response to the fields Harness needs to locate your metric values and host.
  • Metrics Data Provider - Select the Custom Verification Provider you added above.
  • Metric Collections - Click Add Metric Collection. The New Metrics Collection settings appear.
  • Metrics Name - Enter the name to use for your metric. This is not a Datadog value. It is simply the name used for metrics collected by Harness.
  • Metrics Type - Enter the type of metric, such as Infra. These are Harness types, not Datadog's.
  • Metrics Group - Enter the name to group your collected metrics with.
  • Metrics Collection URL - This is the API query that will return a JSON response.

The query for Metrics Collection URL follows this syntax:

query?query=<METRIC_NAME>{pod_name:${host}}by{pod_name}.rollup(avg,60)&from=${start_time_seconds}&to=${end_time_seconds}

The values in ${...} braces are placeholders used for querying the data. These are substituted at runtime with real values.

Replace <METRIC_NAME> in the query with the correct metric name. The metric names are available in Datadog Metric Explorer:

For example, to search for the kubernetes.memory.usage_pct metric, your query would look like this:

query?query=kubernetes.memory.usage_pct{pod_name:${host}}by{pod_name}.rollup(avg,60)&from=${start_time_seconds}&to=${end_time_seconds}
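At runtime, Harness substitutes real values for the ${...} placeholders. As an illustration (the pod name and epoch times below are hypothetical), the resolved query would look like this:

```python
from string import Template

# The Metrics Collection URL with Harness ${...} placeholders
query = Template(
    "query?query=kubernetes.memory.usage_pct{pod_name:${host}}"
    "by{pod_name}.rollup(avg,60)&from=${start_time_seconds}&to=${end_time_seconds}"
)

# Hypothetical runtime values; Harness fills these in automatically
resolved = query.substitute(
    host="harness-test.pod.name",
    start_time_seconds=1527102292,
    end_time_seconds=1527102592,
)
print(resolved)
```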
  • Response Type - Select JSON.
  • Response Mapping - In this section you will map the keys in the JSON response to Harness fields.
  • Transaction Name - Select Fixed or Dynamic depending on the transaction name. In our example, we will use Fixed. If you select Dynamic, you will see the Transaction Name Path and Regex to transform Transaction Name fields. The Transaction Name Path is filled out in the same way as Name below. Use Regex to transform Transaction Name to truncate the value of the Transaction Name Path if needed.
  • Name - Enter a name to map to the metric name. For example, if the metric name is kubernetes.memory.usage_pct then use a name like KubeMemory.
  • Metrics Value - Run the query using Guide from an example to see the JSON response and pick a key to map to Metrics Value.

In Guide from an example, specify the time range and host for the query. To specify the time range, click in the ${startTime} and ${endTime} calendars.

To specify the ${host}, get the full name of a host from Datadog Metrics Explorer:

To copy the name, click in the graph and click Copy tags to clipboard.

Next, paste the name in the ${host} field.

Click Submit. The JSON results appear. Click the name of the field to map to Metrics Value.

  • Hostname JSON path - Use Guide from an example to query Datadog and return the JSON response. In the JSON response, click the key that includes the hostname path.
  • Timestamp - Use Guide from an example to query Datadog and return the JSON response. In the JSON response, click the key that includes the timestamp.

Now that the Metric Collection is complete, click UPDATE to return to the rest of the Metrics Verification State settings.

  • Baseline for Risk Analysis - Select Previous Analysis to have this verification use the previous analysis for a baseline comparison. If your Workflow is a Canary workflow type, you can select Canary Analysis to have this verification compare old versions of nodes to new versions of nodes in real-time.
  • Failure Criteria - Specify the sensitivity of the failure criteria. When the criteria is met, the workflow Failure Strategy is triggered.
  • Data Collection Interval - Specify the frequency with which Harness will query Datadog. The value 2 is recommended.

Click SUBMIT. Now the Datadog custom verification step is added to the Workflow. Run your Workflow to see the results.

Verification Results

Once you have deployed your workflow (or pipeline) using the Custom verification step, you can automatically verify cloud application and infrastructure performance across your deployment.

Workflow Verification

To see the results of Harness machine-learning evaluation of your Custom verification, in your workflow or pipeline deployment you can expand the Verify Service step and then click the APM Verification step.

Continuous Verification

You can also see the evaluation in the Continuous Verification dashboard. The workflow verification view is for the DevOps user who developed the workflow. The Continuous Verification dashboard is where all future deployments are displayed for developers and others interested in deployment analysis.

To learn about the verification analysis features, see the following sections.

Deployments

Deployment info
See the verification analysis for each deployment, with information on its service, environment, pipeline, and workflows.

Verification phases and providers
See the verification phases for each verification provider. Click each provider for logs and analysis.

Verification timeline
See when each deployment and verification was performed.

Transaction Analysis

Execution details
See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took.

Risk level analysis
Get an overall risk level and view the cluster chart to see events.

Transaction-level summary
See a summary of each transaction with the query string, error values comparison, and a risk analysis summary.

Execution Analysis

Event type
Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event.

Cluster chart
View the chart to see how the selected events contrast. Click each event to see its log details.

Event Management

Event-level analysis
See the threat level for each event captured.

Tune event capture
Remove events from analysis at the service, workflow, execution, or overall level.

Event distribution
Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency.

Next Steps

