Mirroring FAQs

  • What happens when a node that was down comes back? Does the data written to one node get synced to the other?

    There is no automatic syncing. If a node is down, the data to be written is buffered by the Client Manager or, if the Client Manager is down, by the collector. When the node comes back, the buffered data is written to its Data Archiver.
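
    As a rough sketch of this buffering behavior (illustrative only; the class and the send_to_archiver callback are hypothetical, not Historian's actual implementation):

    ```python
    from collections import deque

    class BufferedWriter:
        """Illustrative store-and-forward buffer: hold writes while the
        destination node is down, flush them once it comes back."""

        def __init__(self, send_to_archiver):
            # send_to_archiver(tag, timestamp, value) -> bool (False if node is down)
            self.send_to_archiver = send_to_archiver
            self.pending = deque()  # buffered samples, oldest first

        def write(self, tag, timestamp, value):
            self.pending.append((tag, timestamp, value))
            self.flush()

        def flush(self):
            # Drain in arrival order; stop at the first failure and keep
            # the remaining samples for the next attempt.
            while self.pending:
                if not self.send_to_archiver(*self.pending[0]):
                    break
                self.pending.popleft()

    # Example: a sender that fails while the node is down.
    node_up = False
    writer = BufferedWriter(lambda *sample: node_up)
    writer.write("FIC101.PV", "2024-01-01T00:00:00Z", 42.0)  # buffered
    node_up = True   # node comes back...
    writer.flush()   # ...and the buffered sample is delivered
    ```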

  • There is only one Configuration Manager, on the primary node. Can I still make configuration changes if the primary node goes down?

    No. If the Configuration Manager is not available, you can read configurations, because the collectors know the tag information, but you cannot edit or modify them.

  • Is the Configuration Manager a single point of failure?

    Yes. If the primary node goes down, you cannot edit configurations. However, because configuration information is stored in the registry of each client, it remains available for reads and writes during a primary node failure.

  • What happens if one mirror crashes in the middle of a read/write request?

    This operation continues to function as in prior releases. The Client Manager holds a copy of the message request; once the node comes back, the write operation resumes. A read request that is in progress when the node goes down will fail.

  • The server where my primary node is installed is down. What is the expected behavior?

    The Web Admin and Web Trend Tool will not be available. You can look up tag configuration in the Historian Administrator (Windows), but you will not be able to edit tag configuration information. All other existing clients should continue to work as expected, with the ability to collect and store data, search for tags, and trend and report on tag information. A new user connection whose default Historian server is set to the primary must first connect to the primary node to get information about all the mirrors before it can automatically fail over to a mirror when the primary node is down.
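
    A minimal sketch of that connection rule, assuming hypothetical connect and list_mirrors callbacks (these names are not part of any Historian API):

    ```python
    class HistorianConnection:
        """Illustrative failover rule: a new client must reach the primary
        once to learn the mirror list; only then can it fail over."""

        def __init__(self, primary, connect, list_mirrors):
            self.primary = primary
            self.connect = connect            # opens a session to one node, raises if down
            self.list_mirrors = list_mirrors  # asks the primary for the mirror nodes
            self.mirrors = None               # unknown until the primary answers once

        def open(self):
            if self.mirrors is None:
                # First connection: only the primary can supply the mirror
                # list, so this raises if the primary node is down.
                session = self.connect(self.primary)
                self.mirrors = self.list_mirrors(session)
                return session
            # Subsequent connections: try the primary, then fail over to mirrors.
            for node in [self.primary] + self.mirrors:
                try:
                    return self.connect(node)
                except ConnectionError:
                    continue
            raise ConnectionError("no Historian node reachable")
    ```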

  • The Client Manager on the primary node is down, but the server is running. What is the expected behavior?

    The Web Admin and the Web Trend Tool, along with all other existing clients, will work as expected, with the ability to make configuration changes, collect and store data, search for tags, and trend and report on tag information. A new user connection whose default Historian server is set to the primary must first connect to the primary node to get information about all the mirrors before it can automatically fail over to a mirror when the primary node is down.

  • One of the data archivers is down, but at least one is active. What is the expected behavior?

    The system should continue to function as designed. The Web Admin, Web Trend Tool, and Historian Administrator (Windows), as well as other clients, should continue to work as expected, with the ability to collect and store data, search for tags, and trend and report on tag information.

  • If there are calculated tags on a multi-node system, are the calculations done on all nodes?

    Yes.

  • Are Historian tag stats created independently? Can they differ between nodes?

    Yes. These stats are queries to a specific Data Archiver, not tags. Because writes are independent, one Data Archiver may be ahead of another, so the stats may vary slightly.

  • How do we ensure that the data is consistent across data archivers?

    Tag information is consistent; there is only one definition of each tag. Every time stamp and value is sent to all mirrors.

  • Are there specific log files that I should be looking for to help diagnose issues with mirror failure modes?

    No changes were made to the Data Archiver logs; however, there are new log files for the Client Manager and the Configuration Manager.

  • There are now two *.ihc files: *config.ihc and *CentralConfig.ihc. What is the difference between the two?

    *CentralConfig.ihc is the overall master configuration file used by the Configuration Manager. *config.ihc is used by the Data Archiver and is generated from *CentralConfig.ihc. This was done to maintain consistency between Historian versions. To carry configurations between versions or Historians, refer to Reusing an archive configuration file in the Historian eBooks.

  • With mirroring, is Microsoft Cluster Server still supported? What is the recommended approach?

    Mirroring is offered as a replacement for Microsoft Cluster Server as the high-availability offering for Enterprise Historian. Running mirrored Historian systems under MCS has not been tested or validated to date.

  • Must SQL Server be installed in a system with mirrors?

    No. SQL Server is required only for the alarm and event database (AEDB).

  • How does mirroring work with SQL AE logging?

    There is still an alarm archiver. It does not go through the Client Manager; it talks to SQL Server as before.

  • How does AE fit in with mirror syncing?

    There is one database, so everyone talks to the same SQL Server database. You can cluster that database, but that is separate from mirroring.

  • How does mirroring work in a workgroup (non-domain) environment?

    Mirroring is not supported in Workgroups.

  • Are there any issues when making changes in the Historian Administrator on a mirrored system?

    You must establish a mirror using the Historian Web Admin Console, but compatibility with all APIs has been maintained. Therefore, you can make tag changes in either the Web Admin or the VB Windows Admin, and those changes appear in both Admins.

  • Are there any plans to add more than three mirrors?

    No performance benefits have been seen beyond three mirrors.

  • Do redundant collectors behave differently in mirrors?

    No, there should not be any difference in behavior.

  • Are there any conflicts when using port 14000 for Historian-to-Historian communications, for example, site to corporate?

    No. The Client Manager is now on port 14000, the Data Archiver on port 14001, and the Configuration Manager on port 14002.
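
    Given those assignments, a quick way to confirm each service is reachable from a client machine is a plain TCP connect test (the host name below is a placeholder):

    ```python
    import socket

    # Port assignments from the answer above; "historian-node" is a placeholder.
    SERVICES = {
        "Client Manager": 14000,
        "Data Archiver": 14001,
        "Configuration Manager": 14002,
    }

    for name, port in SERVICES.items():
        try:
            with socket.create_connection(("historian-node", port), timeout=2):
                print(f"{name}: listening on port {port}")
        except OSError:
            print(f"{name}: not reachable on port {port}")
    ```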

  • If load balancing uses round-robin reads, does the cache need to be loaded separately on both machines, and will that decrease performance?

    It does require more memory, because each Data Archiver loads its own cache. The Client Manager decides where to send each message, and it knows the configuration. There is some overhead, but it is outweighed by having multiple Data Archivers servicing multiple requests; that is why two mirrors yield about a 1.5X improvement rather than 2X.
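
    As an illustration of round-robin dispatch (hypothetical classes; the real Client Manager's routing also accounts for node health and configuration):

    ```python
    import itertools

    class Archiver:
        """Stand-in for a Data Archiver; query() is a hypothetical API."""
        def __init__(self, name):
            self.name = name
        def query(self, tag, start, end):
            return f"{self.name}: {tag} [{start}..{end}]"

    class RoundRobinReader:
        """Round-robin dispatch of reads across mirrored archivers."""
        def __init__(self, archivers):
            self.cycle = itertools.cycle(archivers)
        def read(self, tag, start, end):
            # Each read goes to the next archiver in rotation; every archiver
            # warms its own cache, which is why memory use grows per mirror.
            return next(self.cycle).query(tag, start, end)

    reader = RoundRobinReader([Archiver("mirror-1"), Archiver("mirror-2")])
    print(reader.read("FIC101.PV", "8am", "9am"))  # served by mirror-1
    print(reader.read("FIC101.PV", "8am", "9am"))  # served by mirror-2
    ```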

  • Are there any additional considerations if Mirroring is being used with other GE apps like Workflow or Plant Apps?

    No. To outside systems, a mirrored system still looks like one Historian.

  • Is the store and forward feature also used in mirroring?

    Yes. Store and forward is a feature of the collector and is independent of mirroring. Once the message is handed to the Client Manager, the collector's part is done. If the Client Manager cannot reach one of the Data Archivers, it buffers the request until that Archiver is available.

  • In a mirrored environment, do existing queries and reports work the same?

    Yes. Everything works the same as it did before: queries and reports see the mirrored system as a single Historian and communicate over the same ports through the same API.

  • Does the Historian OPC HDA server still work in a mirrored environment?

    Yes.

  • If data is being written to two Data Archivers, does this double the traffic from the collector?

    No. The collector sends a single message to the Client Manager, so collector traffic is unchanged. Traffic is doubled between the Client Manager and the two Data Archivers.
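
    A sketch of that fan-out, with hypothetical classes standing in for the real components:

    ```python
    class Archiver:
        """Stand-in for a Data Archiver; write() is a hypothetical API."""
        def __init__(self, name):
            self.name = name
        def write(self, sample):
            print(f"{self.name} stored {sample}")

    class ClientManager:
        """Illustrative fan-out: one inbound message becomes one outbound
        message per mirrored Data Archiver; this is where traffic doubles."""
        def __init__(self, archivers):
            self.archivers = archivers
        def dispatch(self, sample):
            for archiver in self.archivers:
                archiver.write(sample)

    # The collector sends a single message to the Client Manager...
    cm = ClientManager([Archiver("archiver-1"), Archiver("archiver-2")])
    cm.dispatch(("FIC101.PV", "2024-01-01T00:00:00Z", 42.0))
    # ...and the Client Manager forwards it to both Data Archivers.
    ```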