About the Proficy Historian Overview Objects

The Overview objects are the counters that measure the samples collected and sent by the Data Archiver. You cannot use the counters to perform the following actions:
  • Measure the performance of a specific read.
  • Track the reads of a specific client or program.

The Overview object is the preferred way to measure and describe a system. Its values are calculated as the sum of the values reported by each Data Store instance. After you understand the Overview object, you can use the associated per-Data Store counters to identify the most active Data Store.

The performance counters are more useful than the administrator UI because:
  • The read rates are not available in the administrator UI.
  • The write rate in the administrator UI is updated only once a minute. The counters are updated in real time, making it much easier to see exactly when a problem began.
  • The administrator UI shows only the data for the last 10 minutes, whereas the counters can be displayed over a longer time period to locate active times.
  • You can view the counters alongside non-Historian counters in the same trend.
  • The counters remain accessible when you cannot reach the administrator UI for performance or security reasons.
The reads vary based on the load on the Data Archiver.
Note: The load on the Data Archiver does not depend only on the number of read calls. The load also increases with the number of tags, archives, and raw samples. You can monitor some of these activities using the counters.
You can use the Overview object to measure the following:
  • Number of samples examined internally relative to the number of samples returned to a user at a given time
  • Variation in the read and write load over a day, week, or month
  • The number of out-of-order writes during a given time range
  • The average number of samples examined per read call
The following table describes the read counters of the Overview object.
Counter Name | Description
Read Rate (Calls/min) | The number of user- or program-initiated read calls processed over the last minute
Read Raw Rate (Samp/min) | The number of raw data samples examined internally over the last minute in response to read calls
Read Samp Rate (Samp/min) | The number of raw data samples returned to external programs over the last minute in response to read calls
Note: The counts and rates in the above table are generic across the Data Archiver. They do not provide detailed information, such as why a particular read took the time it did (for example, 8 seconds) or where that time was spent. However, they can explain why the same read criteria take different amounts of time on two different days: if more reads or writes are happening in the Data Archiver, the same read takes longer.

Comparing Read Raw Rate and Read Samp Rate

Run a query for a one-month average of 200 tags that collect data every second. Assume that you store the data in one-day archives. The Data Archiver has to examine a large number of raw samples (200 tags x 60 seconds x 60 minutes x 24 hours x 30 days), spread across 30 one-day archives, to produce only 200 returned samples. If the query completes within 1 minute without any error, the Read Raw Rate value is a very large number and the Read Samp Rate value is 200.

Run the same query with the sampling mode Raw By Time. The Read Raw Rate shows the same value because the same number of raw samples are examined. Because every examined sample is returned to the caller, the Read Samp Rate equals the Read Raw Rate.
Note: The number of archives examined is not reflected in the counters. You get the same Read Raw Rate and Read Samp Rate whether the samples come from one 30-day archive or from thirty one-day archives.
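
As a rough illustration, the following Python sketch reproduces the arithmetic of this example. The tag count, collection interval, and query window are taken from the scenario above; the script does not call any Historian API.

```python
# Sketch of the counter arithmetic for the example query above.
TAGS = 200                       # tags in the query
SECONDS_PER_DAY = 60 * 60 * 24   # one sample per second per tag
DAYS = 30                        # one-month window stored in one-day archives

# Raw samples the Data Archiver must examine to answer the query.
raw_samples_examined = TAGS * SECONDS_PER_DAY * DAYS        # 518,400,000

# A monthly average returns one sample per tag.
samples_returned_average = TAGS                             # 200

# With the Raw By Time sampling mode, every examined sample is returned.
samples_returned_raw_by_time = raw_samples_examined

print(f"Examined:               {raw_samples_examined:,}")
print(f"Returned (average):     {samples_returned_average:,}")
print(f"Returned (Raw By Time): {samples_returned_raw_by_time:,}")
```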

You cannot compare reads and writes simply by looking at the number of write calls. A write call can contain samples for multiple tags, and the timestamps on the data affect the number of archives accessed by the write. A collector typically writes data for all of its tags with the same timestamp, so the write call accesses a single archive. A migration program, however, can write two years of data for a tag, which can access many archives.

The following table provides the counters that have a rate over the last minute. These counters describe the data write activity in the Data Archiver.
Counter Name | Description
Write Rate (Average) | The number of raw samples received from external programs in the last minute
Write Rate (Max) | The highest value of Write Rate (Average) since the Data Archiver started
The following table provides the total counters since the Data Archiver started. If the Data Archiver runs for a long time, the counters can wrap back to zero.
Counter Name | Description
Writes (Expensive) | The total number of raw samples that were expensive writes since the Data Archiver started
Writes (Total Failed) | The total number of data samples that failed to be stored since the Data Archiver started
Writes (Total) | The total number of data samples stored to IHA files since the Data Archiver started
Writes (Total OutOfOrder) | The total number of data samples written out of time order since the Data Archiver started. The number includes only successful writes. Out-of-order writes are slower than writes of data that arrives in time order.
Note: Although some counters are rates and some are totals, all the counters are in units of data samples.

Comparing the number of Raw Samples Read and Written

The Write Rate (Average) is the write equivalent of the Read Samp Rate. You can compare the two counters to see whether your Data Archiver handles more reads or more writes per minute.

Understanding the varying load on the Data Archiver

Trend the Write Rate (Average) and Read Samp Rate over a 24-hour period. You may see certain times of the day where the load varies, such as when reports are run, a collector performs a store-and-forward flush, or data is recalculated with the Calculation Collector. Then look at the data for a month. A system used for compliance or billing has a very low read rate until the end-of-month report is run. Compare that to a system used for real-time, auto-updating trending, which has a more consistent read load throughout the month.

Calculating the rate of out of order writes during a given time range

Out of order data writes are only exposed as a count, not a rate.

You can compute the number of out-of-order writes during a specific time range (for example, between 3:15 pm and 3:25 pm) by getting the value of Writes (Total OutOfOrder) at each timestamp and subtracting one value from the other. You can convert the result to a rate per minute by dividing it by the length of the range (10 minutes in this example).
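
A minimal sketch of this calculation, assuming you have already sampled the Writes (Total OutOfOrder) counter at the two timestamps; the counter values below are hypothetical.

```python
# Sketch: converting two samples of the Writes (Total OutOfOrder) counter into a
# rate per minute. The counter values are made-up illustrations; in practice they
# would come from Windows Performance Monitor or a saved counter log.
from datetime import datetime

def out_of_order_rate(t1, v1, t2, v2):
    """Return out-of-order writes per minute between two counter samples."""
    minutes = (t2 - t1).total_seconds() / 60.0
    return (v2 - v1) / minutes

# Counter read at 3:15 pm and again at 3:25 pm (hypothetical values).
start, start_value = datetime(2024, 1, 15, 15, 15), 12_500
end, end_value = datetime(2024, 1, 15, 15, 25), 13_100

rate = out_of_order_rate(start, start_value, end, end_value)
print(f"{rate:.1f} out-of-order writes per minute")  # (13100 - 12500) / 10 = 60.0
```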

The measurement is useful because out-of-order data occurs in many systems, and each system has a base rate of out-of-order data. If the system shows intermittent changes in write performance, you can calculate the out-of-order rate during those times and compare it to the base rate.

Calculating samples examined per read

Because both the number of read calls and the number of samples examined are exposed, you can divide one by the other to get the number of samples examined per read. In some systems, the number is near one, which indicates many small reads, such as when the Calculation Collector performs many current-value reads that examine one sample and return it. The samples per read will also be near one if you query raw data, such as when replicating data. In contrast, an analytic program that summarizes one-second uncompressed data into 5-minute averages examines about 300 samples per read. The number is an overall, system-wide number, so it is not useful for troubleshooting a single read.
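
A minimal sketch of the division, assuming the two rates were sampled over the same minute. The helper name and the counter values are hypothetical and are chosen to match the two scenarios described above.

```python
# Sketch: estimating samples examined per read call from the Overview counters,
# using Read Raw Rate (Samp/min) and Read Rate (Calls/min) values sampled over
# the same minute. The numbers below are illustrative only.

def samples_examined_per_read(read_raw_rate, read_call_rate):
    """Average raw samples examined per read call over the same interval."""
    if read_call_rate == 0:
        return 0.0
    return read_raw_rate / read_call_rate

# Many small current-value reads: roughly one sample examined per call.
print(samples_examined_per_read(read_raw_rate=1_000, read_call_rate=1_000))  # 1.0

# 5-minute averages over one-second uncompressed data: about 300 samples per call.
print(samples_examined_per_read(read_raw_rate=30_000, read_call_rate=100))   # 300.0
```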