What is the future of advanced computing, and what does it mean for industrial companies? The National Research Council is preparing a broad set of recommendations to guide the National Science Foundation’s high performance computing grants and initiatives for the years 2017-2020. GE is already making substantial investments in advanced, high performance computing, and presented its vision for the future at the recent High Performance Computing and Data (HPCD) Workshop in Dallas, Texas. The workshop is preparing a final report that will be submitted to the NRC and NSF, but this article offers a sneak preview of some of GE’s perspectives on advanced computing for the years to come.


Groundbreaking computing architecture

Speeds, capacities, and capabilities have of course grown exponentially, but at a basic level, today’s computers still resemble the first room-sized mainframes. Virtually every computing device in the world can trace its design back to the architecture defined by John von Neumann 70 years ago. Alternative approaches to the von Neumann architecture have historically been limited to academic and research settings, or to subcomponents of larger, conventional computers and processors. Consequently, computer scientists of the past 70 years have focused almost exclusively on developing software algorithms and data structures that perform well on this one design, which remains prevalent across mainstream commercial hardware.

Today’s chip designers face formidable barriers to improving performance because of energy consumption and heat dissipation limits. The cost of data transfers inherent in the dominant processor design has stimulated innovation and produced new breakthroughs in non-von Neumann architectures. One novel approach of great interest is the field of neuromorphic computing, which develops architectures that mimic the nervous system and brain. Building on insights into the strengths of biological nervous systems, neuromorphic designs have proven particularly adept at tasks such as machine learning and sensory pattern recognition, including vision and hearing.
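
To make the idea concrete, the sketch below (written in Python purely for illustration, and not drawn from any particular neuromorphic platform) simulates a leaky integrate-and-fire neuron, the kind of simple spiking unit that many neuromorphic chips realize directly in silicon; the parameter values are illustrative assumptions.

    import numpy as np

    def lif_neuron(input_current, dt=0.001, tau=0.02,
                   v_rest=0.0, v_threshold=1.0, v_reset=0.0):
        # Simulate a leaky integrate-and-fire neuron: the membrane
        # potential leaks toward rest, integrates injected current,
        # and emits a spike (then resets) when it crosses threshold.
        v = v_rest
        trace, spike_steps = [], []
        for step, current in enumerate(input_current):
            v += (-(v - v_rest) + current) * (dt / tau)
            if v >= v_threshold:
                spike_steps.append(step)  # record the spike...
                v = v_reset               # ...and reset the potential
            trace.append(v)
        return np.array(trace), spike_steps

    # A constant drive above threshold produces a regular spike train.
    trace, spikes = lif_neuron(np.full(1000, 1.5))

A conventional processor would evaluate such a model as ordinary sequential arithmetic; neuromorphic hardware instead implements very large numbers of these units in parallel, communicating through spikes rather than fetched data.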

Also growing in prominence and potential is the processor-in-memory (PIM) approach. As the massive growth of data has outpaced gains in the hardware that moves it, the bottleneck between CPU and memory has become an increasingly problematic aspect of von Neumann’s architecture. Fetching data from memory is one of the most energy-expensive operations in digital electronics, and no matter how fast the CPU, it is limited by the lag (or “latency”) of waiting for more data to arrive from memory. One developing approach to improving the performance of data-intensive problems is to embed processing directly in the memory itself, notionally sending the code to the data rather than fetching the data into a central processor. This is particularly valuable for fine-grained parallel tasks where the code is small but the data very large, including image processing, systems simulation, and materials modeling.
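
Programming models for PIM hardware are still taking shape, but the toy Python sketch below conveys the basic idea of shipping a small kernel to where the data already resides and moving back only compact results; the MemoryBank class and the threshold_count kernel are hypothetical illustrations, not a real PIM API.

    # Toy illustration of the processor-in-memory idea: ship a small
    # kernel to each memory bank and move back only compact results,
    # instead of streaming all of the raw data through a central CPU.
    # MemoryBank and threshold_count are hypothetical, not a real API.

    class MemoryBank:
        def __init__(self, data):
            self.data = data  # a large block of data "resident" in this bank

        def run_kernel(self, kernel):
            # Execute a small piece of code right next to its data.
            return kernel(self.data)

    def threshold_count(data, limit=0.9):
        # Tiny kernel: count readings above a limit.
        return sum(1 for x in data if x > limit)

    banks = [MemoryBank([0.10, 0.95, 0.80]),
             MemoryBank([0.99, 0.20, 0.97])]

    # Only two small integers cross the "memory bus", not six readings.
    total_exceedances = sum(bank.run_kernel(threshold_count) for bank in banks)

The payoff is that only the small per-bank results travel to the central processor, rather than the full data set.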

However, the very novelty of these architectures makes them difficult to use. Programming tools and languages have matured over the last half century around conventional hardware designs. Thus, early adopters will need to be prepared to tolerate, nurture, or even create the algorithms, data structures, compilers, development environments, and other software needed to harness these emerging architectures. One recently prominent technology that could help re-imagine how such software is developed is machine learning. Leveraging the combined strengths of artificial and human intelligence should dramatically accelerate the maturation of new programming models and enable greater agility in mapping software to changes in the underlying hardware.


Impact on the Industrial Internet

Technology already plays a vital role in the design, manufacture, and testing of the machinery and conveyances that make today's industrial world work. Further advances in materials science will improve the performance and availability of valuable assets such as power plants and aircraft. Combining better materials design with more accurate maintenance and replacement schedules is key to delivering a safer and more reliable infrastructure. That’s where computer-driven models come in.

Building on data gathered from sensors throughout industrial machines, computer-driven models are an important tool for predicting the impact of the stresses and fractures that occur in the day-to-day operation of mission-critical components. By identifying performance anomalies and patterns, operators can proactively replace parts nearing the end of their useful life, improving availability and reducing unplanned downtime. Visionary modeling efforts, such as the Materials Genome Initiative, are working to accelerate the pace of discovery and deployment of advanced materials. By virtualizing designs as digital representations of parts, components, subsystems, and integrated machines, we can often foresee issues before anything is manufactured and assembled physically. This reduces both the cost of development and the time it takes to go from idea to product.
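
As a simplified illustration of the sensor-driven side of this work, the Python sketch below flags readings that deviate sharply from their recent history using a rolling z-score; the window size and threshold are illustrative assumptions, and production prognostics models are far richer, typically combining many sensors with physics-based models of component wear.

    import numpy as np

    def flag_anomalies(readings, window=50, z_limit=4.0):
        # Flag indices whose reading deviates sharply from the recent
        # history of the same sensor (a simple rolling z-score test).
        readings = np.asarray(readings, dtype=float)
        flagged = []
        for i in range(window, len(readings)):
            history = readings[i - window:i]
            mean, std = history.mean(), history.std()
            if std > 0 and abs(readings[i] - mean) > z_limit * std:
                flagged.append(i)
        return flagged

    # A steady vibration signal with one sudden excursion at index 200.
    signal = np.concatenate([np.random.normal(1.0, 0.05, 200),
                             [2.5],
                             np.random.normal(1.0, 0.05, 50)])
    print(flag_anomalies(signal))  # typically reports [200]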

As we look ahead, high performance computing and data (HPCD) will play an important role in advancing the design, manufacture, operation, and servicing of GE products. The broader scientific and engineering community represented in the NRC study shares the same challenges and opportunities. Computational methods and data analysis have become essential instruments in the modern practice of the scientific method, and investing in this infrastructure will be crucial for competitiveness in the years to come.

About the author

Rick Arthur

Senior Principal Engineer