Safety in the Hands of Trusted Machines
Imagine a future where machines can anticipate our needs and meet them without direct control. A world where they can accommodate our biological strengths and weaknesses by augmenting not only our physical abilities, but also our attention, intelligence and creativity. This is the path of autonomy.
Over the next 100 years, machines will acquire many of the same cognitive abilities as humans. They will be able to learn and grow, combining the best qualities of subsymbolic AI, such as Generative Adversarial Networks (GANs), with the best of logical AI systems, such as planning and attention. This hybridization will finally begin to address Moravec’s Paradox and usher in the third wave of AI.
The Robotics & Autonomy team at GE Research works closely with all of GE’s businesses and key partners across the globe to envision, shape and build robotic and autonomous systems from idea to commercialization. Our autonomous landscape spans from power plants and wind farms to refineries and hospitals.
The team is highly diverse, with researchers holding advanced degrees in Computational Psychology, Electromechanical Systems, Mechanical Engineering, Electrical Engineering, Robotics, Software Engineering and Computer Science. While they bring different skill sets, they share extensive real-world experience in creating a new autonomy architecture that drives the future advancement of robotics and autonomous systems.
GE RESEARCH TECHNOLOGY PORTFOLIO
We’re developing a generalized three-layer autonomy stack using open standards and modularity to support rapid development and robust sense-making.
A decisioning framework that manages Perception & Action to regulate autonomous behavior.
A collection of algorithms & pipelines optimized to deliver high-speed robotic perception.
A collection of algorithms & libraries optimized to deliver perception-driven intelligent behaviors.
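To make the layering concrete, here is a minimal sketch of how a decisioning layer might sit between a perception layer and a behavior library. All class names, percept labels and the arbitration rule are illustrative assumptions, not GE's actual architecture or API.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    label: str          # e.g. "obstacle", "blade_defect" (hypothetical labels)
    confidence: float   # detector confidence in [0, 1]

class PerceptionLayer:
    """Layer 1: turns raw sensor data into labeled percepts."""
    def sense(self, raw_frame):
        # Stand-in for real high-speed perception pipelines.
        return [Percept(label, conf) for label, conf in raw_frame]

class BehaviorLayer:
    """Layer 3: perception-driven behaviors the robot can execute."""
    def available(self):
        return {
            "avoid": lambda p: f"avoiding {p.label}",
            "inspect": lambda p: f"inspecting {p.label}",
            "idle": lambda p: "holding position",
        }

class DecisioningLayer:
    """Layer 2: manages Perception & Action to regulate the autonomy."""
    def decide(self, percepts, behaviors):
        # Simple arbitration rule for illustration: react to the
        # most confident percept, treating obstacles as safety-critical.
        if not percepts:
            return behaviors["idle"](Percept("nothing", 0.0))
        top = max(percepts, key=lambda p: p.confidence)
        if top.label == "obstacle":
            return behaviors["avoid"](top)
        return behaviors["inspect"](top)

# Wire the three layers together for one sense-decide-act tick.
perception, behaviors, decisioning = PerceptionLayer(), BehaviorLayer(), DecisioningLayer()
frame = [("obstacle", 0.9), ("blade_defect", 0.6)]
action = decisioning.decide(perception.sense(frame), behaviors.available())
print(action)
```

Keeping arbitration in its own layer is what lets perception pipelines and behavior libraries evolve independently behind stable interfaces.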
The Embedded AI Computer
An AI supercomputer on a module that packs performance into a small, power-efficient form factor.
This new autonomy stack will change architectural paradigms by making attention models and new cognitive approaches the organizing principle. Rather than having a mission planner swap in sets of appropriate behaviors, the system uses context to decide what is important to the robot right now, leveraging reactive planning while still using traditional planning and behaviors to implement its guidance. Some examples include:
- Scale inspection technologies from single turbines to swarms of vehicles surveying an entire wind farm
- Safe operation in national airspaces through a combination of cooperative and noncooperative collision avoidance
- Adaptive autonomy able to respond to new situations in a robust fashion without requiring retraining or new development
- Improving quality of life by building intelligent companions that can help with daily activities
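The context-driven selection described above can be sketched in a few lines: an attention model scores how salient each stimulus is in the current context, and the most salient stimulus, not a pre-swapped behavior set, determines what the robot does next. The contexts, stimulus types, weights and behavior names below are all assumed for illustration.

```python
# Assumed attention weights: how much each context amplifies each stimulus.
CONTEXT_WEIGHTS = {
    "transit":    {"obstacle": 1.0, "defect": 0.2, "low_battery": 0.8},
    "inspection": {"obstacle": 0.7, "defect": 1.0, "low_battery": 0.8},
}

# Hypothetical mapping from the attended stimulus to a behavior.
BEHAVIOR_FOR = {
    "obstacle": "reactive_avoid",    # reactive planning handles immediate threats
    "defect": "planned_inspect",     # traditional planning handles task work
    "low_battery": "return_to_base",
}

def select_behavior(context, stimuli):
    """Pick the behavior for the most attention-worthy stimulus.

    `stimuli` maps stimulus type -> raw intensity in [0, 1].
    """
    weights = CONTEXT_WEIGHTS[context]
    salience = {s: i * weights.get(s, 0.0) for s, i in stimuli.items()}
    focus = max(salience, key=salience.get)  # what matters to the robot *now*
    return BEHAVIOR_FOR[focus]

# During inspection, a clear defect outranks a faint obstacle...
print(select_behavior("inspection", {"obstacle": 0.3, "defect": 0.6}))
# ...but a looming obstacle seizes attention regardless of mission phase.
print(select_behavior("inspection", {"obstacle": 0.9, "defect": 0.6}))
```

The point of the sketch is that context never disables a behavior; it only reweights attention, so safety-critical stimuli can always win the arbitration.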
The primary focus of the autonomy engine is to enable our robotic systems to deal with variation and uncertainty. Once our autonomous systems can effectively handle variation and uncertainty (in both the mission and the environment), they will be able to execute a wide variety of mission types communicated in higher-level natural language by their users and companions.