In my previous blog on rethinking data management and the cloud, I covered the powerful outcomes of leveraging the data value chain from the control system to the cloud. That is just the start; other possibilities are emerging as well. As we facilitate a greater flow of data, and the right data, into the cloud, our ability to analyze it and to ask "what-if" questions rises dramatically.
With more and more data available in the cloud, we are able to model our assets and drive predictive analytics, most notably early anomaly detection in asset performance. We can see deviations from normal or desired asset behavior long before they become visible to standard operational systems.
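To make the idea concrete, here is a minimal, illustrative sketch of deviation-based early detection. It is not GE SmartSignal's actual algorithm; it simply fits a "normal behavior" model (mean and spread) from healthy data and flags readings that drift away from it before any fixed operational alarm would trip. The sensor values are hypothetical.

```python
# Minimal sketch of residual-based early anomaly detection (illustrative only).
from statistics import mean, stdev

def fit_normal_model(training_readings):
    """Learn a simple normal-behavior model: mean and spread."""
    return mean(training_readings), stdev(training_readings)

def detect_anomalies(readings, model, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    mu, sigma = model
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Healthy bearing temperatures (hypothetical), then a slow upward drift that
# a fixed operational alarm at, say, 90 degrees would not catch yet.
normal = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2, 70.1, 69.7, 70.0, 70.2]
model = fit_normal_model(normal)
live = [70.1, 70.4, 71.5, 72.8, 74.0]

print(detect_anomalies(live, model))  # flags the drifting readings
```

The point of the sketch is the contrast with standard operational systems: the drift here never approaches a conventional alarm limit, yet it stands out immediately against the learned model of normal behavior.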
The challenge, of course, is that early anomaly detection doesn't tell you exactly what's wrong, so it doesn't by itself give you a fast route to resolution. Once you know there is an anomaly, you have to go back and schedule tests, look at the data at a more detailed level, and move from machine learning and prognostics down into the underlying data.
When our customers do that, they generate what are called "cases," and over time they build up a record of what we refer to as "catches": signatures of bad asset behavior. From all of these signatures, they aggregate a substantial base of knowledge.
We're managing the data more effectively from one tip of the enterprise to the other, across its full wingspan. From wing to wing, we can now say: not only do I want the raw data here for what-if analysis, I also want to bring in the aggregate set of these catches across many different customers and mine that data. So not only can I ask, "What if?" I can also ask, "What does this look like?"
What that means is that a given customer does not have to have seen a given anomaly before in order to act quickly when a new (to them) anomaly crops up. If any one GE SmartSignal customer captures a case and uploads it into the shared knowledge base, every customer who uses our anomaly detection service receives the benefit of that learning. That shared understanding dramatically cuts the cycle time between detecting an anomaly and changing what we and our customers do out in the field.
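A shared knowledge base of catches can be pictured with a short sketch. Everything here is hypothetical, including the signature features and the matching rule; real case matching would be far richer. The idea is only that a new anomaly signature is compared against cases captured by other customers, so a diagnosis recorded once benefits everyone.

```python
# Hypothetical sketch of a shared "catch" knowledge base: each case pairs an
# anomaly signature (a feature vector) with the diagnosis recorded by whichever
# customer first caught it. The names and metric are illustrative, not an API.
import math

knowledge_base = [
    # (signature, diagnosis); signatures might encode vibration, temp, load
    ((0.9, 0.1, 0.3), "bearing wear: schedule lubrication and inspection"),
    ((0.2, 0.8, 0.1), "blade fouling: schedule compressor wash"),
    ((0.1, 0.2, 0.9), "sensor drift: recalibrate transmitter"),
]

def match_case(signature, cases, max_distance=0.5):
    """Return the diagnosis of the nearest known case, if it is close enough."""
    best = min(cases, key=lambda case: math.dist(signature, case[0]))
    return best[1] if math.dist(signature, best[0]) <= max_distance else None

# A customer who has never seen this anomaly still gets the shared diagnosis.
print(match_case((0.85, 0.15, 0.25), knowledge_base))
```

If no case is close enough, the function returns `None`, which is exactly the situation where a customer would dive into the data, build a new case, and upload it, growing the knowledge base for everyone else.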
The result is a tremendous boost in productivity and a significant decrease in operations and maintenance budgets. It also improves our ability to manage deployed capital: with a better understanding of the potential anomalies given our machines' conditions, we can change the way we think about our spares strategy, for example.
The power of combining data management and the cloud is greater than looking at analytics in the cloud alone, or at data management in the field alone. We can optimize the data flow from the asset to the analytics, create operational models, and perform different what-if analyses. Then we can push those models back down into the assets themselves, changing their behavior and closing the loop on how we automate them.
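The "close the loop" step can be sketched conceptually. This is an assumption-laden toy, not how a control system is actually updated: cloud-side analytics refit an operating limit from fleet data, then push it back down to a stand-in edge controller. In practice any such push would go through the customer's validated automation stack.

```python
# Conceptual sketch of closing the loop: cloud analytics derive a tighter
# operating limit from fleet data and push it back to the asset. All names
# here are illustrative stand-ins.
from statistics import mean, stdev

class AssetController:
    """Stand-in for an edge controller holding an operating limit."""
    def __init__(self, limit):
        self.limit = limit

    def apply_limit(self, new_limit):
        self.limit = new_limit  # in reality: a validated configuration push

def refit_limit(fleet_readings, margin=3.0):
    """Cloud side: derive a tighter limit from observed normal behavior."""
    return mean(fleet_readings) + margin * stdev(fleet_readings)

controller = AssetController(limit=90.0)       # coarse factory default
fleet = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2, 70.4, 69.7]
controller.apply_limit(refit_limit(fleet))     # close the loop
print(round(controller.limit, 1))
```

The design point is the direction of flow: data moves up from the asset to the analytics, and a model-derived decision moves back down to change asset behavior, rather than analytics remaining a one-way reporting exercise.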
As a result, we start to see a system that works well together, from the control system to the cloud, in which our industrial data flows naturally back and forth across the different layers of responsibility within the enterprise. The goal is not to optimize any one of those parts but the enterprise as a whole, from wing to wing, for the best operational performance possible.