Consider gas turbines, or anything rotational built from ferrous material. Making them “smart” is anything but easy. As the rotating components do their job, a magnetic field is produced. Add to that the windings designed to capture those directional magnetic fields and generate electricity, and what you have is a hostile environment, at least from an electromagnetic-emissions perspective. The interference is strong enough that many sensors fail, or simply cannot do their job, which leads engineers to make them ever more rugged. That adds weight and size. For a gas turbine installed in a power plant this has little impact, but for a jet engine it increases fuel burn, reduces efficiency, and in every case raises cost.

Balancing additional sensory equipment against cost and weight targets is a bit like the observer effect in quantum mechanics: one cannot measure a system without affecting it. Add more sensing equipment to the hot section of a jet engine and you now affect the gas path, the temperature characteristics, and potentially the efficiency. Thus a traditional trade-off exists, as it has since long before the Industrial Internet was conceived: compromises are made so that just enough data is available to infer the health of the asset. The onboard health monitoring systems must be capable of processing this data and keeping the asset operating within its design envelope. As soon as the asset is detected moving outside of tolerance, it gracefully degrades operation until maintenance can be performed to resolve the underlying issue. Notifications of such events are monitored by global 24x7 operations centers that can dispatch maintenance crews when needed.
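The monitor-and-derate loop described above can be sketched in a few lines. This is a minimal illustration, not any real engine control system: the class names, the temperature limit, and the derate percentage are all invented for the example.

```python
# Hypothetical sketch of the monitor-and-derate loop described above.
# Names, limits, and percentages are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class Envelope:
    max_temp_c: float = 950.0       # illustrative out-of-tolerance limit
    derated_power_pct: float = 80.0  # illustrative graceful-degradation level


@dataclass
class HealthMonitor:
    envelope: Envelope
    power_pct: float = 100.0
    notifications: list = field(default_factory=list)

    def ingest(self, temp_c: float) -> None:
        """Check one reading against the design envelope."""
        if temp_c > self.envelope.max_temp_c:
            # Gracefully degrade rather than shut down outright.
            self.power_pct = min(self.power_pct,
                                 self.envelope.derated_power_pct)
            # Alert the 24x7 operations center to schedule maintenance.
            self.notifications.append(
                f"over-temp {temp_c:.0f}C: derated to {self.power_pct:.0f}%")


monitor = HealthMonitor(Envelope())
for reading in (910.0, 930.0, 975.0):
    monitor.ingest(reading)

print(monitor.power_pct)           # derated after the third reading
print(monitor.notifications)
```

The key design point mirrors the text: the asset keeps operating, just within a reduced envelope, and the event is pushed outward for humans to act on.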

The automation that makes this possible requires a fairly sophisticated infrastructure, including cloud environments with massive amounts of co-located data. As the data and analytics grow, so do the needs of the infrastructure powering it all. If we took a more distributed approach instead, we could realize the same or greater value simply by making better use of the assets we already have. When a jet engine is designed and its monitoring and control systems are specified, some percentage of free capacity is typically baked in to allow for future growth. Put another way, we have a lot of processing capacity flying around unused.

Through virtualization techniques that isolate programs in their own memory and processing partitions, we could extract additional clock cycles from these devices, in much the same way that data center virtualization enabled higher utilization of physical servers. With that spare capacity, we can place some of our intelligent analysis right next to where the data is collected. That is a game changer. Instead of sending vast amounts of raw data to a central intelligence system, you send the results, the insights gained from analytics run at the source. Your impressive ground infrastructure can then take the business value to the next level by processing intelligence a layer above the raw data.
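The "send insights, not raw data" idea can be shown with a toy reduction. This is a sketch under stated assumptions: the window of vibration samples, the three-sigma anomaly rule, and the field names are all made up for illustration, not taken from any real edge-analytics product.

```python
# Hypothetical sketch: run analytics at the source and transmit only the
# insight, not the raw samples. Thresholds and field names are assumptions.

from statistics import mean, pstdev


def summarize_at_edge(samples, limit_sigma=3.0):
    """Reduce a window of raw samples to a compact, transmittable insight."""
    mu = mean(samples)
    sigma = pstdev(samples)
    # Flag the window if any sample strays beyond limit_sigma deviations.
    outliers = [s for s in samples if sigma and abs(s - mu) > limit_sigma * sigma]
    return {
        "mean": round(mu, 3),
        "stdev": round(sigma, 3),
        "anomaly": bool(outliers),
        "raw_points_suppressed": len(samples),  # data we did NOT send
    }


# Ten nominal vibration readings plus one spike.
window = [0.50, 0.52, 0.49, 0.51, 0.50, 0.48, 0.51, 0.50, 0.52, 0.49, 5.0]
insight = summarize_at_edge(window)
print(insight)
```

The payload shrinks from eleven samples to one small record, which is the whole point: the ground systems receive a conclusion they can act on, and bandwidth is spent on insight rather than telemetry.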

Taking things a step further, the authority to make some decisions and take some actions could be delegated to the edge assets, leaving only the most complex or compute-intensive processing to the sophisticated systems on the ground. The engines would, in effect, have a mind of their own. Regulatory restrictions will certainly need to be considered beyond anything seen in commercial and desktop environments, but embedded virtualization techniques are being developed that, once safety certified, can start making machines smarter. Just as the MISRA coding standards made C and C++ viable in automotive systems, newer, higher-order languages will become prevalent in safety-related systems, and one more barrier to machines becoming brilliant will be removed.

About the author

Rich Phillips

Chief Architect—CTO Office, GE Aviation