FAQs on Installing Historian in a Distributed Environment

  • What happens when a node that was down is up and running? Is the data written to one node synchronized with another?

    There is no automatic synchronization. If a node is down, the information to be written is buffered by Client Manager; if Client Manager is also down, it is buffered by the collector. When the node is back up and running, the buffered data is written to the data archiver.

  • There is only one Configuration Manager on the primary node. Can I still configure if the primary node is down?

    No. If the Configuration Manager is not available, you can read the configuration (because this information is stored in the collectors), but you cannot edit or modify the configuration.

  • Is Configuration Manager a single point of failure?

    Yes. If the primary node is down, you cannot edit the configuration. However, because the configuration information is stored in the registry of each client, it is still available in read-only mode when the primary node is down.

    If the Configuration Manager service is down, you cannot query tags and data in a horizontally scalable system. However, you can query tags and data in the following scenarios:
    • The Historian system contains only one node, which is installed as the primary mirror Historian server.
    • The Historian system contains only one mirror location, and there are no data stores in the distributed locations.

  • What happens if a node crashes in the middle of a read/write request?

    The operation continues to function in the same way as in prior releases. Client Manager holds a copy of the message request; therefore, once the node is up and running, the write operation is resumed. However, read requests will fail.

  • The server where my primary node is installed is down. What is the expected behavior?

    The Web Admin console and Trend Client will not be available. You can access tag configuration using Historian Administrator, but you cannot edit it. All other existing clients continue to work as expected, with the ability to collect and store data, search for tags, and trend and report on tag information. A new user connection whose default Historian server is set to the primary must first connect to the primary node and retrieve information about all the nodes; only then can it automatically fail over when the primary node is down.
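
    The failover constraint described above (a new client must reach the primary once to learn the node list before it can fail over) can be sketched generically. This is a minimal illustration with an injected `connect` transport; all names are hypothetical, and this is not Historian's actual client API.

    ```python
    class FailoverClient:
        """Illustrative sketch only: hypothetical names, not the Historian client API."""

        def __init__(self, primary, connect):
            self.primary = primary
            self.connect = connect    # injected transport: connect(node) -> node list
            self.known_nodes = None   # unknown until the primary is reached once

        def ensure_node_list(self):
            # A new connection must reach the primary first to learn all nodes;
            # until then it cannot fail over.
            if self.known_nodes is None:
                self.known_nodes = self.connect(self.primary)
            return self.known_nodes

        def connect_any(self):
            # Once the node list is known, try each node in turn.
            for node in self.ensure_node_list():
                try:
                    self.connect(node)
                    return node
                except ConnectionError:
                    continue
            raise ConnectionError("no node reachable")
    ```

    If the primary is unreachable before the node list has ever been retrieved, the sketch raises immediately, mirroring the behaviour described above for new connections.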

  • Client Manager on the primary node is down, but the server is running. What is the expected behavior?

    The Web Admin console and Trend Client, along with all other existing clients, will work as expected, with the ability to make configuration changes, collect and store data, search for tags, and trend and report on tag information. A new user connection whose default Historian server is set to the primary must first connect to the primary node and retrieve information about all the mirrors; only then can it automatically fail over when the primary node is down.

  • One of the data archivers is down, but at least one is active. What is the expected behavior?

    The system should continue to function as designed. The Web Admin console, Trend Client, Historian Administrator, as well as other clients continue to work as expected, with the ability to collect and store data, search for tags, trend and report on tag information.

  • If there are calculated tags in a distributed environment, are the calculations done on all nodes?

    Yes.

  • Are Historian tag statistics created independently? Can they be different between different nodes?

    Yes. Statistics are queries to a specific data archiver, not tags. Because writes are independent, one data archiver may be ahead of another, so the statistics may vary slightly between nodes.

  • How do we ensure that the data is consistent across data archivers?

    Tag information is consistent because there is only one tag definition; the same time stamp and value are sent to all the nodes.

  • Are there specific log files that I should be looking for to help diagnose issues with failure modes?

    No changes were made to the logs for data archiver; however, there are new log files for Client Manager and Configuration Manager.

  • There are now two *.ihc files: *config.ihc and *CentralConfig.ihc. What is the difference between the two?

    *CentralConfig.ihc is the master configuration file used by Configuration Manager. The *config.ihc file is used by the data archiver and is generated from *CentralConfig.ihc. This is to maintain consistency between Historian versions.

  • With mirroring, is Microsoft Cluster Server still supported? What is the recommended approach?

    Mirroring is offered as a substitute for Microsoft Cluster Server. Mirroring provides high availability for locations. Microsoft Cluster Server has not been tested or validated to date with Historian systems.

  • Should I install SQL Server in a distributed environment?

    No. SQL Server is required only for the Historian Alarms and Events database.

  • How does mirroring work with Historian Alarms and Events SQL logging?

    There is still an alarm archiver. It does not go through Client Manager; it connects to SQL Server as before.

  • How does Historian Alarms and Events fit in with synchronization?

    There is one database, so all nodes talk to the same SQL Server database. You can cluster the database, but that is separate from mirroring.

  • How does mirroring work in a workgroup environment or non-domain?

    Mirroring is not supported in Workgroups.

  • Are there any issues when making changes in Historian Administrator and a mirrored system?

    You must establish a mirror using the Historian Configuration Hub, but compatibility with all APIs has been maintained. Therefore, you can make tag changes in either the Web Admin or the VB Windows Admin, and those changes will show up in both Admins.

  • Are there any plans to add more than three mirrors?

    No performance benefits have been seen beyond three mirrors.

  • Do redundant collectors behave differently in a distributed environment?

    No.

  • Are there any conflicts when using port 14000 for Historian to Historian communications (for example, Site to Corporate)?

    No. Client Manager is now on port 14000, data archiver is on port 14001, and Configuration Manager is on port 14002.
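
    Because each service listens on a fixed port, a plain TCP reachability check can help identify which service is unavailable. A minimal sketch using only Python's standard library; the host name is a placeholder you would replace with your Historian server.

    ```python
    import socket

    # Port assignments as documented above.
    HISTORIAN_PORTS = {
        "Client Manager": 14000,
        "Data Archiver": 14001,
        "Configuration Manager": 14002,
    }

    def check_ports(host, ports=HISTORIAN_PORTS, timeout=2.0):
        """Return {service: True/False} based on a plain TCP connect to each port."""
        results = {}
        for service, port in ports.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    results[service] = True
            except OSError:
                results[service] = False
        return results
    ```

    A successful connect shows only that the port is open, not that the service behind it is healthy, but it quickly narrows down which component to investigate.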

  • If load balancing uses round robin reads, does the cache need to be loaded separately on both machines, and will it decrease performance?

    It does require more memory. Client Manager decides where to send the messages because it knows the configuration. There is some overhead, but it is offset by having multiple data archivers to service multiple requests; that is why two mirrors yield a 1.5X improvement rather than 2X.
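
    The round-robin read dispatch described above can be illustrated generically. This is a conceptual sketch of the scheduling idea only, not Client Manager's actual implementation.

    ```python
    import itertools

    class RoundRobinDispatcher:
        """Conceptual sketch: rotate read requests across mirrored archivers."""

        def __init__(self, archivers):
            self._cycle = itertools.cycle(archivers)

        def next_archiver(self):
            # Each read goes to the next archiver in turn, so two mirrors share
            # the read load. Writes still go to every archiver, which is why the
            # overall gain is closer to 1.5X than 2X.
            return next(self._cycle)
    ```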

  • Are there any additional considerations if a distributed system is used with other GE applications such as Workflow or Plant Applications?

    No. It still looks like one Historian to other applications.

  • Is the store-and-forward feature also used in a distributed environment?

    Yes. Store-and-forward is a collector feature. Once the message is sent to Client Manager, the collector's work is done. If Client Manager cannot reach one of the data archivers, it buffers the request until that archiver is available.
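
    The buffering behaviour described above can be sketched generically: messages for an unreachable destination queue in order and are flushed when it comes back. A conceptual sketch with hypothetical names and an injected `send` transport; this is not Historian's internal implementation.

    ```python
    from collections import deque

    class StoreAndForward:
        """Conceptual sketch: queue messages per destination until deliverable."""

        def __init__(self, send):
            self.send = send     # injected: send(dest, msg) -> True if delivered
            self.buffers = {}    # dest -> deque of pending messages

        def deliver(self, dest, msg):
            queue = self.buffers.setdefault(dest, deque())
            queue.append(msg)
            # Flush in order; stop at the first failure so ordering is preserved.
            while queue and self.send(dest, queue[0]):
                queue.popleft()

        def pending(self, dest):
            return len(self.buffers.get(dest, ()))
    ```

    Flushing strictly in order matters: delivering newer messages ahead of buffered ones would reorder the archive's input stream.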

  • In a distributed environment, do existing queries and reports work the same?

    Yes. Everything works the same as before: clients see the system as a single Historian and communicate over the same ports through the same API.

  • Does the Historian OPC Classic HDA server still work in a distributed environment?

    Yes.

  • If data is being written to two data archivers, does this double the traffic from the collector?

    No. The collector sends a single message to Client Manager, so traffic from the collector does not increase. The traffic is doubled between Client Manager and the two data archivers.