“When you’re flying a plane, you’re not actually flying a plane,” Mary Cummings, professor at Duke’s Pratt School of Engineering, says. “A computer’s flying the plane, and you’re trying to tell the computer how to fly the plane.”
Cummings should know. A former Navy fighter pilot, she’s now director of Duke University’s Humans and Autonomy Lab. Below, she discusses her work on “human supervisory control,” applied to everything from drones to driverless cars.
1. A lot has happened in the field of drones since you first joined the taskforce on U.S. Drone Policy more than three years ago. How would you describe the progress in drone-related policymaking and innovation today?
I think there’s been a lot of progress with policy. I have been clamoring, as many other people have, for commercial opportunities. Thanks to Part 107, the rules for non-hobbyist small unmanned aircraft, those commercial opportunities now exist. And that’s great, because it’s sparking new businesses. There’s certainly a huge need in agricultural surveillance and intervention, such as crop dusting, which could be turned over entirely to drones. It’s areas like that where the commercialization of small drones has been very useful. Police are able to search for missing people or hikers lost in the middle of the wilderness. These technologies can make a real, tangible difference.
2. What are the next big leaps in drone technology and rules?
I think the elephant in the room is still large-scale aircraft. While it’s great to think about package delivery with drones, that’s never going to be a major part of the aviation market. The economies of scale will only come when we allow full-scale aircraft to fly as drones, which they are capable of, for, say, FedEx, DHL and UPS. When you can turn those aircraft into drones, not only is that a game changer for those businesses, but it also reduces their carbon footprint. When a computer flies the plane, it emits less carbon dioxide. The next big leap in drones will be the development of an air-traffic control system that can manage them.
I think that, technically, if we put our minds to it, it could be solved within one to two years. The real issue is that it’s another regulatory problem.
3. With the proliferation of small drones for sale, are there uses of drones you think shouldn’t be permitted or encouraged at this point?
I tell people: any mission that you see an airplane do, a drone can do. A lot of the ideas out there are neat, but simply not scalable. I doubt that the idea of Amazon Prime Air, or any other company delivering food by drone right to your doorstep, is really going to take off as a major business model. I do think more important things like emergency medicine delivery to rural America could be a real game changer.
I’m not saying you shouldn’t use drone food delivery. There are just so many problems with doing that, including bad weather and my 9-year-old with a broomstick who would totally take a drone out if she could. There’s a whole host of what I call socio-technical issues that have to be addressed.
4. What are surprising ways in which machines could make better decisions when it comes to people? For example, could police robots be helpful in routine traffic stops?
There has been this idea that maybe robots will be less biased than humans. But Google actually had an AI algorithm that was very biased. And Microsoft’s Twitter bot began modeling incredibly racist language within 16 hours.
There’s been an analogous situation to traffic stops that the military’s been working on for a long time – for checkpoints.
The problem is that these algorithms and robots are only as good as their programmers. They’re only as good as the data that they take in. It would be nice, in a perfect world, if we could have robots help us correct racial biases in police work and potentially even the military. But we’re still pretty far from that.
5. What are the most pressing questions that Duke’s Humans and Autonomy Lab is exploring?
We have two really big thrusts right now. One is designing dispatch centers for driverless cars, flying cars and automated trains of the future.
What are you going to do when your driverless car is stuck on the road in the middle of traffic without a steering wheel? You’re going to have to have somebody remotely supervising a fleet of cars. Think OnStar. If there’s a mechanical problem, they’ll send a rapid reaction team. There’s a whole new world emerging of what we call super-dispatchers.
Another area of our research is “explainable” AI: how to help humans of all backgrounds understand what AI algorithms are doing. Before we start making broad applications of AI, we’re going to have to understand how to explain what these algorithms are doing. I think that it’s a big area of research that’s just now emerging. It’s anybody’s guess what this field will look like in five years.
Mary “Missy” Cummings is currently a professor in the Duke University Pratt School of Engineering and the Duke Institute of Brain Sciences, and is the director of the Humans and Autonomy Laboratory and Duke Robotics.
All views expressed are those of the author.
Top image: Germany’s Lilium is developing an electric flying vehicle that can vertically take off and land. Image credit: Lilium.