Autonomy will play an increasingly important role in the world of transportation, from driving our cars to flying our aircraft. Underlying these autonomous systems are complex algorithms that, in many cases, have been learned from training data rather than written by hand.
How does one certify an autonomous system for such safety-critical applications? Traditional software verification methodologies rely on reasoning about code, but these techniques do not apply directly to machine-learned algorithms, which amount to opaque representations of complex mathematical functions. Attempts to reason about these functions directly do not scale because of the nonlinear components, such as activation functions, in their representations. New techniques are needed to reason formally about such systems.
GE Research's Trusted Autonomy project will start by researching how airborne collision avoidance algorithms for small unmanned aircraft can be certified. We will use a deep neural network to represent the decision logic and then apply new formal techniques to prove safety properties about the system.
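To make "proving safety properties about the system" concrete, the sketch below uses interval bound propagation, one standard formal technique for neural networks: push a box of inputs through each layer, soundly over-approximating at the nonlinear ReLU steps, and check that the resulting output bounds imply the property. The 2-4-2 network and its random weights are purely illustrative, not an actual collision avoidance policy, and this is only one of several verification approaches.

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    # Propagate an input box [lo, hi] through an MLP with ReLU on the
    # hidden layers; returns sound over-approximate output bounds.
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)  # split weights by sign
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:            # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy 2-4-2 network with hypothetical random weights (not a real policy).
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 2)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]

# Bounds on both outputs over the whole input box [-0.1, 0.1]^2.
lo, hi = interval_bounds(layers, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print(lo, hi)
```

If, say, `hi[1] < lo[0]` held, output 0 would provably dominate output 1 for every input in the box, i.e. the corresponding advisory is always selected there. Because the bounds are over-approximate, the check can fail even when the property holds, which is one reason more precise techniques are an active research area.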
If such a system can be certified, it can enable safe operations of small unmanned aircraft beyond visual line of sight.
We aim to develop a general methodology that can support other applications, such as urban air mobility and air transport. A second focus of the project will be to study and implement Runtime Assurance Monitors that can be certified and can constrain the outputs of untrusted applications controlling safety-critical systems. Formal verification techniques that are intractable for complex machine-learning and artificial-intelligence algorithms can instead be applied to the runtime assurance monitors themselves, which are simple enough for their correctness to be formally proved.
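The runtime assurance pattern can be sketched in a few lines: a simple monitor checks each command from the untrusted controller against a safety envelope and substitutes a verifiable fallback command whenever the check fails. The altitude envelope, step model, and controllers below are hypothetical stand-ins chosen for illustration, not the project's actual design.

```python
# Simplex-style runtime assurance sketch with an assumed altitude envelope.
ALT_FLOOR, ALT_CEIL = 100.0, 400.0   # permitted altitude band, metres

def safe_next(alt, climb_rate, dt=0.1):
    """Would applying this climb rate for one step stay in the envelope?"""
    nxt = alt + climb_rate * dt
    return ALT_FLOOR <= nxt <= ALT_CEIL

def fallback(alt):
    """Simple recovery controller (easy to verify): steer toward mid-envelope."""
    return 1.0 if alt < (ALT_FLOOR + ALT_CEIL) / 2 else -1.0

def monitor(alt, untrusted_cmd):
    """Pass the untrusted command through only if it is provably safe."""
    return untrusted_cmd if safe_next(alt, untrusted_cmd) else fallback(alt)

print(monitor(200.0, 2.0))     # prints 2.0: safe command passes through
print(monitor(100.2, -50.0))   # prints 1.0: unsafe dive replaced by fallback
```

The point of the architecture is that only `safe_next`, `fallback`, and `monitor` need formal proof; the untrusted controller, however complex or learned, never has to be verified because its output is always filtered through the monitor.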