Sample Analytics

Download and test sample analytics on the Application Analytics UI.

Note: Use these samples only in a demo environment. Because they contain sample data only, they are not intended for active tenants in production use.
Table 1. Different Types of Sample Analytics
Sample Description | Sample ZIP Download
Simple Expression Evaluator Analytics Project | https://apm-application-help-rc.int-app.aws-usw02-pr.predix.io/apm-simple-expression-evaluator-v1.zip
Java Threshold Analytics Project | https://apm-application-help-rc.int-app.aws-usw02-pr.predix.io/apm-threshold-analytic-java.zip
Matlab Threshold Analytics Project | https://apm-application-help-rc.int-app.aws-usw02-pr.predix.io/apm-threshold-analytic-matlab.zip
Python Threshold Analytics Project | https://apm-application-help-rc.int-app.aws-usw02-pr.predix.io/apm-threshold-analytic-python.zip
Sample Output JSON | https://apm-application-help-rc.int-app.aws-usw02-pr.predix.io/SampleOutput.json
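
The threshold samples all follow the same basic pattern: read a time series input, compare each value against a threshold constant, and emit alerts for values that exceed it. The following Python sketch illustrates that pattern only; the payload shape and the names temp_today and threshold are borrowed from the example output later in this topic, and this is not the actual code shipped in the sample ZIPs.

# Hypothetical sketch of the logic a threshold analytic implements.
# Input/output field names mirror the sample output shown later in
# this topic; they are illustrative, not the shipped sample code.

def run_threshold_analytic(payload: dict) -> dict:
    """Flag every temp_today reading that exceeds the threshold constant."""
    readings = payload["data"]["time_series"]
    threshold = payload["data"]["constants"]["threshold"]

    alerts = {"date": [], "score": [], "sensor": []}
    for ts, value in zip(readings["time_stamp"], readings["temp_today"]):
        if value > threshold:
            alerts["date"].append(ts)
            alerts["score"].append(str(value))
            alerts["sensor"].append(["temp_today"])
    return {"alerts": alerts}

sample = {
    "data": {
        "time_series": {
            "time_stamp": [1468535940000, 1468536000000],
            "temp_today": [244.889, 9.5],
        },
        "constants": {"threshold": 10},
    }
}
print(run_threshold_analytic(sample))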

Generate Analytic with Alerts

Steps to upload and deploy a sample analytic with alerts on the Predix runtime.

Before You Begin

This procedure assumes that the prerequisite tasks have been completed.

Procedure

  1. Add the alert template.
    1. Select Alert Templates.
    2. Select + to add a new template.
    3. Enter Analytics in the Alert Template field.
    4. Select Save.
  2. Upload the analytics template to the catalog.
    Configure the following information for the analytic.
    Option | Description
    Runtime | Predix
    Name | AlertKVPair_Analytic_Sample
    Owner | Your Name
    Analytic Type | Java
    Type Version | 1.8
    Analytic File | https://apm-application-help-rc.int-app.aws-usw02-pr.predix.io/apm-threshold-analytic-KVjava-1.0.1-SNAPSHOT.jar
    Analytic Version | 1.0.0
    Primary Category | Monitoring
  3. In the Analytic Template, configure the input definition, constant, and output definition through CSV upload, and then select Save.
  4. Enter Analytics in the Output Events field and select Save.
  5. Add and configure the deployment as follows (a simplified sketch of the I/O mapping appears after this procedure):
    1. Enter AlertKVPair_Deployment in the Deployment Name field and then select Submit.
    2. In the 1. Asset Selection step, select the asset defined in the analytic, and then select Save.
    3. Select Next to access the 2. I/O Mapping step.
    4. Select the Tag drop-down menu, and then select Add Tags....
    5. In the tag browser, search for the temperature tag to map to the analytic input temp_today (for example, TAG_Temperature_ID14). When the search displays the tag, drag and drop it onto the input to map it.
    6. Select Save and Next to save the I/O Mapping configuration.
    7. In the 3. Schedule step, leave the selection at Only Once for the Define how often a new run will be executed option.
    8. Select a Time Span starting from when the time series data is available for the input tag. For example, if data has been available since May of last year, you can select May 1, 2017 through the current date.
    9. Leave the Sample Interval at the default value of 1 Minute.
    10. Select Save and then select Deploy.
    The deployment is saved to the Predix runtime. After successful deployment, the status updates to Run Once.
  6. Claim the analytic alert in the Alerts module.
    1. Select Alerts > Unclaimed.
    2. Filter alerts by Time Received.
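
The I/O mapping in step 5 pairs a time series tag with an analytic input port, and the Output Events field pairs the analytic output with an alert template. The following Python sketch shows that pairing using the tag and template names from the steps above; the dict layout is a simplification for illustration, not the runtime's internal format.

# Illustrative only: how the I/O mapping pairs a time series tag with the
# analytic's input port, and how the Output Events field pairs the output
# with an alert template. Names come from the procedure above.

io_mapping = {
    "inputs": {"temp_today": "TAG_Temperature_ID14"},  # analytic port -> tag
    "outputs": {"alerts": "Analytics"},                # output -> alert template
}

def resolve_input(port: str) -> str:
    """Return the time series tag the runtime reads for a given input port."""
    return io_mapping["inputs"][port]

print(resolve_input("temp_today"))  # -> TAG_Temperature_ID14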

Predix Essentials Port to Field Map for Predix Analytics

An analytic port-to-field map is a JSON structure derived from the analytic template configuration (input definitions, output definitions, and output events). It tells the runtime engine which data sources to connect to when fetching inputs and writing outputs.

Type: PortToFieldMap

See the following table for a description of the elements in a PortToFieldMap.

Field | Description
analyticName | Template name as defined in Predix Essentials.
analyticVersion | Analytic version provided at the time of analytic upload or creation.
comment | (Optional) Informational only.
orchestrationStepId | Applies to analytic orchestration; an auto-generated ID for the orchestration step.
iterations | Supports multiple iterations for the same analytic; creates an entry per iteration.
Example PortToFieldMap JSON

The following example JSON represents the port-to-field map data for a per-port time series array.


{
    "comment": [
        "/pxDeployments/23dabd15-ea98-47ba-aec5-c8adb59b630d",
        "e3692835-411b-4a1a-82cb-c88e73f98c53",
        ""
    ],
    "analyticName": "Shared_timestamp_sample_analytic",
    "analyticVersion": "1.0.0",
    "orchestrationStepId": "sid-fb5e38a2-c5fe-4f8a-84d8-2428d7f7361a",
    "iterations": [
        {
            "id": "0",
            "inputMaps": [
                {
                    "valueSourceType": "DATA_CONNECTOR",
                    "fullyQualifiedPortName": "data.time_series.input1",
                    "fieldId": "Shared_timestamp_sample_analytic_DeploymentStep1_1_input_input1",
                    "queryCriteria": {
                        "start": "${START_TIME}",
                        "end": "${END_TIME}"
                    },
                    "dataSourceId": "PredixTimeSeries"
                },
                {
                    "valueSourceType": "DATA_CONNECTOR",
                    "fullyQualifiedPortName": "data.time_series.input2",
                    "fieldId": "Shared_timestamp_sample_analytic_DeploymentStep1_1_input_input2",
                    "queryCriteria": {
                        "start": "${START_TIME}",
                        "end": "${END_TIME}"
                    },
                    "dataSourceId": "PredixTimeSeries"
                },
                {
                    "valueSourceType": "CONSTANT",
                    "fullyQualifiedPortName": "data.constants.threshold",
                    "value": 10
                }
            ],
            "outputMaps": [
                {
                    "fullyQualifiedPortName": "time_series.output1",
                    "fieldId": "Shared_timestamp_sample_analytic_DeploymentStep1_1_output_output1",
                    "dataSourceId": "Temporary,PredixTimeSeries"
                },
                {
                    "fullyQualifiedPortName": "time_series.output2",
                    "fieldId": "Shared_timestamp_sample_analytic_DeploymentStep1_1_output_output2",
                    "dataSourceId": "Temporary,PredixTimeSeries"
                }
            ],
            "inputModelMaps": []
        }
    ]
}
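
As a quick sanity check, a port-to-field map like the one above can be inspected with a few lines of Python. This sketch assumes the JSON has been saved locally as port_to_field_map.json (an illustrative file name, not one the product produces):

# Read a port-to-field map and list how each analytic port is wired.
import json

with open("port_to_field_map.json") as f:
    p2f = json.load(f)

print(f"{p2f['analyticName']} v{p2f['analyticVersion']}")
for iteration in p2f["iterations"]:
    for m in iteration["inputMaps"]:
        if m["valueSourceType"] == "CONSTANT":
            # Constants carry their value inline instead of a data source.
            print(f"  in  {m['fullyQualifiedPortName']} = {m['value']}")
        else:
            print(f"  in  {m['fullyQualifiedPortName']} <- {m['dataSourceId']}")
    for m in iteration["outputMaps"]:
        print(f"  out {m['fullyQualifiedPortName']} -> {m['dataSourceId']}")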

Sample Data Map for Analytic Outputs

When you execute a deployment, the system enables you to download a ZIP file to your hard drive that contains one <asset_sourcekey>.json file for every asset against which the runtime ran the analytic:
Table 2. Deployment JSON Files
This JSON file | Is related to this
<asset_sourcekey>.json | The output data of the analytic execution and the status of the deployment execution.
Note: You can receive multiple files in the ZIP file.
<asset_sourcekey>.json

The <asset_sourcekey>.json file comprises the execution parameters, the execution status, and the alerts and time series data output by the analytic. There is one <asset_sourcekey>.json file for each asset for which the analytic was executed.

Table 3. Output JSON Structure

Deployment Metadata

"analyticName" : "ThresholdPassthrough4",
"analyticUri" : "/analyticEntries/55a6617e-a614-4add-93c4-1b8790341546",
"ioMappingUri" : "/ioMappings/014fd31f-a95e-4ea2-a108-d060ad44846d",
"entitySourceKey" : "56002",
"entityName" : "56002",
"ioMappingName" : "Deploy3",
"deployStatus" : "Run Once",
"filenameToDataObjects" : { },
"deployParams" : {
    "hashCode" : -1219813131,
    "deploymentName" : "testVal3",
    "startTime" : "1591815000000",
    "endTime" : "1591901400000",
    "scheduleStartTime" : "Next Available",
    "timeZone" : "Site Local",
    "shouldRetry" : true,
    "startDateType" : "Deployment Date",
    "startDate" : "1591901445858",
    "endOffset" : {
      "timeUnit" : "minutes",
      "timeValue" : 0
    },
    "maximumDataInterval" : {
      "timeUnit" : "minutes",
      "timeValue" : 1440
    },
    "deployType" : "DeployOnDemand",
    "samplingType" : "Raw",
    "deployResult" : {
      "hashCode" : 0,
      "deployStatus" : "Run Once",
      "deployedAnalyticVersion" : "1.0.0",
      "isDirty" : false
    }
}
Output Metadata
  "output" : {
          "time_series" : {
            "time_stamp" : [ ],
            "mean" : [ ],
            "deviation" : [ ]
          },
          "alerts" : {
            "date" : [ ],
            "score" : [ ],
            "sensor" : [
              "temp_today"
            ]
          }
        }
Output Alerts
{
  "alerts" : {
    "date" : [
      1468535940000,
      1468536000000,
      1468536060000
    ],
    "score" : [
      "244.8890076",
      "241.0489502",
      "243.3664856"
    ],
    "sensor" : [
      [ "temp_today" ],
      [ "temp_today" ],
      [ "temp_today" ]
    ]
  }
}
Output Timeseries
 "time_series" : {
      "time_stamp" : [
        1501574800001,
        1501594800001,
        1501600000001,
        1501614800001,
        1501634800001,
        1501654800001,
        1501674800001
      ],
      "mean" : [
        105.0,
        75.0,
        90.0,
        100.0,
        175.0,
        75.0,
        1160.0
      ],
      "deviation" : [
        1.0,
        1.0,
        1.0,
        1.0,
        1.0,
        1.0,
        1.0
      ]
    }
  }

The date, score, and sensor blocks are related. The first elements from these three blocks are used to create the first alert; the second elements are used to create the second alert, and so on.

The time_stamp block is related to all other blocks in the time_series block. The first element in the mean block corresponds to the first element in the time_stamp block; the second element in the mean block corresponds to the second element in the time_stamp block, and so on.
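
A small Python sketch of the positional pairing described above, using values from the alert excerpt in this topic: element i of date, score, and sensor forms alert i, just as element i of mean pairs with element i of time_stamp.

# Pair up the parallel alert arrays by position.
alerts = {
    "date":   [1468535940000, 1468536000000, 1468536060000],
    "score":  ["244.8890076", "241.0489502", "243.3664856"],
    "sensor": [["temp_today"], ["temp_today"], ["temp_today"]],
}
for date, score, sensor in zip(alerts["date"], alerts["score"], alerts["sensor"]):
    print(f"alert: date={date} score={score} sensor={sensor[0]}")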

For each entry in the mean block, there is one entry in the time series database. The tag id for each data point is in the <asset_sourcekey>.<analytic_name>.<output_def_name> format.

For example, for the values [asset_sourcekey = 3999, analytic_name = "analytic1", output_def_name = "mean"], the tag id is "3999.analytic1.mean".

The following entries are available in the time series database after the execution for output_def_name = "mean", asset_sourcekey = 3999, analytic_name = "analytic1":
Table 4. Time-series Database Entries
tagId | timestamp | value
3999.analytic1.mean | 1430829000000 | 33.380000000000003
3999.analytic1.mean | 1430761000000 | 47.0
... | ... | ...

Similarly, there are data point entries for the deviation block as well.
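
The following Python sketch shows how each mean entry becomes one time series data point whose tag id follows the <asset_sourcekey>.<analytic_name>.<output_def_name> format; the data values are taken from Table 4.

# Build the tag id and emit one database entry per mean element.
asset_sourcekey = "3999"
analytic_name = "analytic1"
output_def_name = "mean"
tag_id = f"{asset_sourcekey}.{analytic_name}.{output_def_name}"

time_stamp = [1430829000000, 1430761000000]
mean = [33.380000000000003, 47.0]

for ts, value in zip(time_stamp, mean):
    print(tag_id, ts, value)  # one row in the time series database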

Figure: Asset JSON Input


Figure: Analytic Deployment Status
The following are the possible statuses and results:
  • PROCESSING
  • DEPLOYED
  • FAILED