Deployment Architecture

The following diagram shows the deployment architecture of Proficy Historian for AWS. In this diagram:
  • Data Archiver, UAA, and PostgreSQL are deployed in an Amazon Elastic Compute Cloud (EC2) instance in a private subnet inside Amazon Elastic Kubernetes Service (EKS).
  • Amazon Elastic File System (EFS) is connected to Data Archiver.
  • Network Load Balancer (NLB), AWS collector instances, and the Web Admin console instances are in a public subnet.
  • EFS is in the Virtual Private Cloud (VPC), whereas CloudWatch and CloudTrail are outside the VPC. EFS sends archiver logs to CloudWatch, which you can use for analysis. CloudTrail is used to log access events for auditing.
  • Collector 1 and Collector 2 are collector instances created on an on-premises Windows machine. Similarly, Excel Addin for Historian and Historian Administrator are installed on an on-premises client machine.
  • Collector 3 and Collector 4 are collector instances created on an EC2 instance in a VPC (which can be a different VPC from the one in which the Historian server is deployed).
  • The Web Admin console is deployed as an AMI on an EC2 instance in a VPC (which can be a different VPC from the one in which the Historian server is deployed).
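One way to keep the topology above at hand is as a small lookup structure. The sketch below is purely illustrative (the zone names are informal labels, not AWS resource identifiers) and simply records which component the bullets place in which network zone:

```python
# Illustrative map of the deployment bullets above: zone label -> components.
# Zone names are informal labels for this sketch, not AWS resource names.
TOPOLOGY = {
    "private_subnet_eks": ["Data Archiver", "UAA", "PostgreSQL"],
    "public_subnet": ["NLB", "AWS collector instances", "Web Admin console instances"],
    "vpc_services": ["EFS"],
    "outside_vpc": ["CloudWatch", "CloudTrail"],
    "on_premises": ["Collector 1", "Collector 2", "Excel Addin", "Historian Administrator"],
    "other_vpc": ["Collector 3", "Collector 4", "Web Admin console (AMI)"],
}

def locate(component: str) -> list:
    """Return every zone label in which a component appears."""
    return [zone for zone, parts in TOPOLOGY.items() if component in parts]
```

For example, `locate("EFS")` returns `["vpc_services"]`, matching the bullet that places EFS inside the VPC.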

This next diagram shows the high availability architecture:

How tag data is stored when using collectors of an on-premises Proficy Historian (TLS encryption is not used):
  1. Collectors send a request to AWS Network Load Balancer (NLB) to write tag data.
  2. NLB sends the request to Data Archiver. If user authentication is needed, Data Archiver sends the request to UAA, which verifies the user credentials stored in PostgreSQL. After authentication, NLB confirms to the collectors that data can be sent.
  3. Data collected by the collector instances is sent to NLB.
  4. NLB sends the data to Data Archiver directly. After authentication, Data Archiver stores the data in EFS in .iha files.
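The four write steps above can be sketched as a toy in-memory simulation. The classes below are stand-ins for NLB, Data Archiver, UAA, PostgreSQL, and EFS; no real networking or AWS services are involved, and all names and credentials are invented for illustration:

```python
class UAA:
    """Stand-in for UAA backed by credentials stored in PostgreSQL."""
    def __init__(self, postgres):
        self.postgres = postgres                      # user -> password (toy store)
    def authenticate(self, user, password):
        return self.postgres.get(user) == password

class EFS:
    """Stand-in for EFS holding .iha archive files."""
    def __init__(self):
        self.iha_files = {}                           # archive name -> list of samples
    def append(self, archive, sample):
        self.iha_files.setdefault(archive, []).append(sample)

class DataArchiver:
    def __init__(self, uaa, efs):
        self.uaa, self.efs = uaa, efs
    def write(self, user, password, tag, value):
        if not self.uaa.authenticate(user, password): # step 2: UAA verifies credentials
            return False
        self.efs.append("current.iha", (tag, value))  # step 4: data stored in a .iha file
        return True

class NLB:
    """Toy NLB that forwards collector requests to Data Archiver (steps 1 and 3)."""
    def __init__(self, archiver):
        self.archiver = archiver
    def forward(self, request):
        return self.archiver.write(**request)

efs = EFS()
nlb = NLB(DataArchiver(UAA({"collector1": "secret"}), efs))
ok = nlb.forward({"user": "collector1", "password": "secret",
                  "tag": "Temp01", "value": 72.5})
```

After the call, the sample `("Temp01", 72.5)` sits in the toy `current.iha` archive; a request with wrong credentials is rejected at the UAA check and nothing is written.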
How tag data is stored when using Historian Collectors for Cloud (TLS encryption is used):
  1. Collectors send a request to AWS NLB to write tag data. Since the request is encrypted, port 443 is used.
  2. NLB decrypts the request and sends it to Data Archiver. If user authentication is needed, Data Archiver sends the request to UAA, which verifies the user credentials stored in PostgreSQL. After authentication, NLB confirms to the collectors that data can be sent.
  3. Data collected by the collector instances is encrypted and sent to NLB using port 443.
  4. NLB decrypts the data and sends it to Data Archiver. After authentication, Data Archiver stores the data in EFS in .iha files.
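The TLS path differs only in that NLB terminates the encryption before forwarding. The sketch below models that with a trivially reversible base64 stand-in (real deployments use actual TLS on port 443; base64 provides no security and is used here only so the round trip is visible):

```python
import base64

def tls_encrypt(payload):
    # Stand-in for TLS encryption done by a cloud collector (NOT real crypto).
    return base64.b64encode(payload)

def tls_decrypt(payload):
    # Stand-in for TLS termination at the NLB.
    return base64.b64decode(payload)

class NLBWithTLS:
    """Toy NLB: accepts 'encrypted' traffic on port 443, decrypts it,
    then forwards the plaintext to the archiver store (steps 2 and 4)."""
    def __init__(self, archiver_store):
        self.store = archiver_store          # stand-in for Data Archiver -> EFS .iha files
    def handle(self, encrypted, port):
        if port != 443:
            raise ValueError("encrypted collector traffic arrives on port 443")
        plaintext = tls_decrypt(encrypted)   # NLB decrypts before forwarding
        self.store.append(plaintext)

iha = []
nlb = NLBWithTLS(iha)
nlb.handle(tls_encrypt(b"Temp01=72.5"), port=443)
```

The archiver store ends up holding the decrypted sample `b"Temp01=72.5"`, mirroring how Data Archiver receives plaintext after NLB terminates TLS.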
How data is retrieved:
  1. Clients (that is, Excel Addin, the Web Admin console, the REST Query service, or Historian Administrator) send a request to NLB to retrieve data.
  2. NLB sends the request to Data Archiver, which retrieves data from EFS. If, however, user authentication is needed, Data Archiver sends the request to UAA, which verifies the user credentials stored in PostgreSQL. After authentication, data is retrieved from EFS.
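The retrieval steps above can be sketched the same way, as a toy read path (client to NLB to Data Archiver to EFS). All classes and credentials below are invented stand-ins, not Proficy Historian APIs:

```python
class ReadPath:
    """Toy read path: client request -> NLB -> Data Archiver -> EFS."""
    def __init__(self, efs, credentials):
        self.efs = efs                  # tag -> samples, stand-in for .iha archives on EFS
        self.credentials = credentials  # stand-in for UAA credentials in PostgreSQL
    def retrieve(self, user, password, tag):
        # Step 2: if authentication is needed, UAA checks the credentials first.
        if self.credentials.get(user) != password:
            raise PermissionError("UAA rejected the credentials")
        # After authentication, data is retrieved from EFS.
        return self.efs.get(tag, [])

path = ReadPath({"Temp01": [72.5, 73.0]}, {"webadmin": "s3cret"})
samples = path.retrieve("webadmin", "s3cret", "Temp01")
```

A client with valid credentials gets the tag's samples back; a bad password raises `PermissionError` before EFS is ever read.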