Spike Logic

When compression is enabled, only the first value in a series of samples that fall within the deadband range is written to the Historian archive. When that data is charted using interpolation, interpolated values are inserted into the chart to create a smooth trend between the archived samples. In most cases, interpolation gives a reasonable portrayal of the actual data for a given time period.
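The sketch below illustrates the compression behavior described above. It is a simplified, hypothetical model that uses a fixed deadband width around the last archived value; actual Historian compression is configured per tag, and the function and variable names here are illustrative only.

```python
# Simplified model of deadband compression: a sample is archived only if it
# falls outside the deadband around the last archived value.
def compress(samples, deadband):
    archived = []
    for timestamp, value in samples:
        if not archived or abs(value - archived[-1][1]) > deadband:
            archived.append((timestamp, value))
    return archived

# Raw values stay within the deadband, so only the first sample is archived.
raw = [(0, 12.0), (1, 14.0), (2, 13.0), (3, 15.0), (4, 13.5)]
print(compress(raw, deadband=10.0))   # [(0, 12.0)]
```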

Unfortunately, when a spike occurs in the data, charting the compressed samples produces an unrealistic trend. Instead of showing the result of compression (the same value repeated over a series of intervals), the chart draws a rising or falling slope. This gives the impression that values at a given time stamp were higher or lower than they actually were. The figure below shows the difference between the raw data for a series of samples and how the samples would be charted if data compression were enabled, assuming all values between 10 and 20 are in the deadband range.



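To make the problem concrete, here is a small hypothetical example (the numbers are illustrative, not taken from the figure): only the last pre-spike value and the spike itself survive compression, and linear interpolation between them draws a climb that the raw data never followed.

```python
def interpolate(archived, t):
    """Linear interpolation between the two archived samples that bracket t."""
    for (t0, v0), (t1, v1) in zip(archived, archived[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return archived[-1][1]

# Only the last pre-spike sample and the spike itself were archived.
archived = [(0, 12.0), (5, 85.0)]
print([round(interpolate(archived, t), 1) for t in range(6)])
# [12.0, 26.6, 41.2, 55.8, 70.4, 85.0] -- a steady climb, even though the raw
# values actually stayed near 12 until the spike at t = 5.
```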
Spike logic monitors incoming data samples for spikes in a tag's values. If spike logic is enabled, a sample equal in value to the previously archived sample is inserted into the archive immediately before the spike value. The time stamp of the inserted value is determined by your polling interval: if samples are collected at 1-second intervals, the inserted sample's time stamp will be 1 second before the spike. This clearly identifies the spike and retains a more accurate picture of the data leading up to it, as shown in the following figure.



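A minimal sketch of that insertion step, assuming a fixed 1-second polling interval and the same illustrative numbers as above (the function name and data layout are hypothetical, not a Historian API):

```python
def apply_spike_logic(archived, spike_time, spike_value, polling_interval=1):
    """Insert a copy of the last archived value one polling interval ahead of
    the spike, so the chart holds flat until just before the spike."""
    last_value = archived[-1][1]
    inserted = (spike_time - polling_interval, last_value)
    return archived + [inserted, (spike_time, spike_value)]

archived = [(0, 12.0)]   # last value written to the archive before the spike
print(apply_spike_logic(archived, spike_time=5, spike_value=85.0))
# [(0, 12.0), (4, 12.0), (5, 85.0)] -- interpolation now stays at 12 until t = 4.
```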
Spike logic has two configurable options: Multiplier and Interval. The Multiplier option specifies how much larger the change caused by a spike must be than the deadband range before spike logic is invoked. For example, if a value of 3 is entered in the Multiplier field and the deadband percentage is set to 5%, spike logic will not be invoked until the difference between the spike value and the previously archived data point is at least 15% of the EGU range.
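The threshold in that example works out as shown below; the variable names are illustrative rather than actual configuration fields.

```python
egu_range = 100.0      # engineering-unit span of the tag (EGU high - EGU low)
deadband_pct = 0.05    # deadband set to 5% of the EGU range
multiplier = 3         # value entered in the Multiplier field

# The change must reach multiplier x deadband before spike logic is invoked.
spike_threshold = multiplier * deadband_pct * egu_range
print(spike_threshold)  # 15.0 -- i.e. 15% of the EGU range
```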

The Interval option specifies how many samples must have been compressed before spike logic will be invoked. For example, if the Interval field is set to 4, and 6 values have been compressed since the last archived data sample, spike logic will be invoked.
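Putting the two options together, a hedged sketch of the trigger decision might look like the following, reusing the 5% deadband, multiplier of 3, and interval of 4 from the examples above (names and the exact comparison operators are assumptions, not Historian internals):

```python
def spike_logic_triggered(compressed_count, change,
                          interval=4, multiplier=3,
                          deadband_pct=0.05, egu_range=100.0):
    """Spike logic fires only if enough samples were compressed since the last
    archived point AND the change exceeds the multiplied deadband."""
    large_enough = abs(change) >= multiplier * deadband_pct * egu_range
    enough_compressed = compressed_count >= interval
    return large_enough and enough_compressed

# Six samples compressed since the last archived point, and a jump of 73 units:
print(spike_logic_triggered(compressed_count=6, change=73.0))   # True
```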