The rise of cloud applications has brought many exciting new opportunities, but it has also introduced new engineering challenges. Cloud-based solutions are built to scale in particular ways: data is optimized for transit, intermediary security protections consume processing power in unique ways, and user management is designed to isolate data for privacy protection.
What happens when you have built a scalable, multi-tenant cloud application that you now need to deploy on-premise? The underlying data architecture must change. You can't simply drop the cloud application into an on-premise, single-tenant environment and expect the same performance. The problem is further complicated by the fact that the data architecture must be updated in a way that makes it easy to stay in step with future cloud updates. At GE Digital we love to tackle hard problems, and our engineers have been working to solve this very one.
Before diving into how we solved this challenge, it's worth noting why we took on such a big task. While cloud deployment is safe and effective, there are circumstances where an on-premise implementation is required. We see this most often in highly regulated environments where data must stay on location or where government regulations prohibit data from crossing national borders.
The team took a methodical approach to the four engineering challenges this scenario creates: data volume, code duplication, deployment automation, and UI consistency.
Clearing these obstacles was no easy feat, but with careful planning and a clear vision the team has made remarkable progress. It started with a commitment to learn from the past and a drive to incorporate our customers' future vision. The result: an anytime, anywhere deployment from a single codebase. Below we break down how that was made possible.
To avoid transmitting large volumes of data, the application was architected to optimize data loading. Only the data required for a particular task is pre-loaded; the rest is loaded on demand in response to user actions. This approach improves application performance, responsiveness, and stability.
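As a rough illustration of this load-on-demand idea, here is a minimal Python sketch. The class and method names (`AssetView`, `fetch_summary`, `fetch_history`) are hypothetical, not GE Digital's actual code: only the summary needed for the initial screen is fetched up front, and the heavier history data is fetched the first time a user action requires it.

```python
class AssetView:
    """Pre-loads only the data needed for the initial screen; the rest is lazy."""

    def __init__(self, asset_id, store):
        self.asset_id = asset_id
        self._store = store
        # Eagerly load just the small summary needed to render the first view.
        self.summary = store.fetch_summary(asset_id)
        self._history = None  # deferred until a user action needs it

    @property
    def history(self):
        # Lazy load: fetched once, on first access (e.g. user opens a chart).
        if self._history is None:
            self._history = self._store.fetch_history(self.asset_id)
        return self._history


class FakeStore:
    """Stand-in backend that records which fetches actually happen."""

    def __init__(self):
        self.calls = []

    def fetch_summary(self, asset_id):
        self.calls.append("summary")
        return {"id": asset_id, "status": "ok"}

    def fetch_history(self, asset_id):
        self.calls.append("history")
        return [1, 2, 3]
```

Constructing the view issues only the summary fetch; the history fetch happens solely when (and if) the property is read.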
Avoiding duplicate code was the next challenge the team tackled. A single-codebase framework was built that enables, disables, adds, removes, or changes options based on the deployment target. The application dynamically changes its behavior based on the configured environment, so only one codebase must be maintained.
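One common way to realize this kind of target-driven behavior is a feature-flag table keyed by deployment target. The sketch below is an assumption about the general technique, not GE Digital's implementation; the flag names are invented for illustration.

```python
# Hypothetical deployment configuration: each target enables a different
# subset of features, while all targets share the same code paths.
DEPLOYMENT_FEATURES = {
    "cloud":      {"multi_tenant": True,  "auto_scaling": True,  "local_auth": False},
    "on_premise": {"multi_tenant": False, "auto_scaling": False, "local_auth": True},
}


def feature_enabled(feature, target):
    """Return whether a feature is active for the given deployment target."""
    return DEPLOYMENT_FEATURES[target].get(feature, False)


def authenticate(user, target):
    # Same function for every deployment; the flag picks the branch.
    if feature_enabled("local_auth", target):
        return f"{user}: authenticated against local directory"
    return f"{user}: authenticated against cloud identity provider"
```

Because the branching lives in configuration rather than in separate forks of the code, a fix or feature added for the cloud target ships to on-premise deployments automatically.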
The team also implemented a creative configuration-driven Continuous Integration and Continuous Deployment (CI/CD) pipeline. The pipeline performs end-to-end verification and validation and then, based on configuration, deploys the application. The pipeline steps remain independent, and the process leverages the single-codebase framework to adjust application behavior and deployment configuration parameters to suit the target deployment, minimizing or eliminating manual intervention.
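A configuration-driven pipeline of this shape can be sketched as a fixed sequence of independent stages where only the final deploy step branches on the configured target. The stage names and config keys below are illustrative assumptions, not the actual pipeline definition.

```python
def run_pipeline(config):
    """Run independent verification stages, then deploy per the target config.

    Hypothetical sketch: every target runs the same build/test/validate
    stages; only the final deploy step reads the deployment target.
    """
    executed = []
    for stage in ("build", "unit_test", "integration_test", "validate"):
        executed.append(stage)  # each stage is target-agnostic and independent
    # The single branching point: deployment parameters come from config.
    executed.append(f"deploy:{config['target']}")
    return executed
```

Keeping the verification stages target-agnostic means a single pipeline definition can promote the same artifact to either a cloud or an on-premise environment without manual steps.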
Development wasn't the only consideration. Ensuring a consistent UI/UX experience was important for our customers. To achieve this, the team followed a model-view-controller (MVC) design pattern, re-architecting the application's presentation layer with an adapter pattern. The UI layer interacts with the adapter layer, which supplies backend service information based on the deployment configuration. In short, the presentation layer (UI) remains consistent while the backend layer changes based on the deployment. The complexity is hidden from users, who are simply presented with a familiar interface.
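The adapter idea can be sketched as follows: the UI calls one common interface, and the deployment configuration selects which backend satisfies it at startup. Class names and return values here are invented for illustration.

```python
class BackendAdapter:
    """Common interface the UI layer calls; implementations vary per deployment."""

    def fetch_assets(self):
        raise NotImplementedError


class CloudBackend(BackendAdapter):
    def fetch_assets(self):
        # In a real system this would call cloud APIs.
        return {"source": "cloud-api", "assets": ["pump-1", "turbine-7"]}


class OnPremiseBackend(BackendAdapter):
    def fetch_assets(self):
        # In a real system this would query a local database.
        return {"source": "local-db", "assets": ["pump-1", "turbine-7"]}


def make_backend(target):
    """Chosen once at startup from the deployment configuration."""
    return CloudBackend() if target == "cloud" else OnPremiseBackend()


def render_asset_list(backend):
    # The UI code is identical for every deployment; only the adapter differs.
    return ", ".join(backend.fetch_assets()["assets"])
```

Because `render_asset_list` depends only on the adapter interface, the same presentation code produces an identical screen against either backend.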
To learn more about the types of solutions these technology advances allow, explore GE Digital's Asset Performance Management.
Standardize the collection, integration, modeling, and analysis of disparate data into a single, unified view.
Ensure asset integrity and compliance by monitoring changing risk conditions.
Achieve less unplanned downtime by predicting equipment issues before they occur.
Explore our customer stories and read how Asset Performance Management improves operational performance.