As artificial intelligence technology becomes ever more advanced, bots are walking a fine line: This week brings news of AI helping wheelchair users control their movements with simple facial gestures, on the one hand, and the possibility of robots hallucinating, with less-than-positive results, on the other. Meanwhile, the president of Microsoft is calling for stricter regulation of AI involved in facial recognition. If it’s nuance you seek, you’ll find plenty of ifs, ands and bots in this installment of 5 Coolest Things.
What is it? At a recent conclave between the governments of India and the United Arab Emirates, a UAE rep floated an idea for a two-hour transportation link between the two countries: an underwater bullet train suspended in a tunnel just beneath the surface of the Arabian Sea.
Why does it matter? It’s not just passengers who could take advantage of such a link, a 2,000-kilometer rail system between the Indian city of Mumbai and Fujairah, UAE. It could be used to ship cargo between the countries, and the concept video also shows oil and freshwater pipelines attached to the train tube. The idea is still in its earliest stages; a feasibility study comes next.
How does it work? In the demonstration video, the train speeds through an underwater tunnel suspended by a series of rigs floating on the surface of the water. It looks trippy but, the website Futurism points out, China has previously explored underwater bullet trains, and recently approved one project that involves a 16-kilometer underwater stretch of rail. Still, the bold concept will require a lot of extreme engineering to become more than just a pipe dream.
What is it? Scientists at the University of Queensland’s Australian Institute for Bioengineering and Nanotechnology, or AIBN, have developed a simple blood test that accurately identifies signs of cancer in the body. The team described its findings in Nature Communications.
Why does it matter? Though the concept is still in development, the ability to detect cancer via blood test — any kind, anywhere — would be very big news indeed. The procedure relies on emerging understandings of how cancer shows up in the body’s DNA. “Because cancer is an extremely complicated and variable disease, it has been difficult to find a simple signature common to all cancers, yet distinct from healthy cells,” said Abu Sina, one of the researchers on the AIBN project.
How does it work? Like all cells, explains AIBN, cancer cells are in a continuous process of dying and renewing, and when they die, they “essentially explode,” leaving traces of their DNA in circulation in the body, like dandelion seeds in the breeze. Sina and colleagues found that cancer DNA and healthy DNA stick to metal surfaces — specifically, gold — in markedly different ways. In their test, then, DNA is added to water containing gold nanoparticles; the water changes color instantly depending on the presence or absence of cancerous DNA. Researcher Matt Trau said, “This happens in one drop of fluid. You can detect it by eye, it’s as simple as that.” On about 200 samples so far, the test has achieved an accuracy rate of about 90 percent. But in clinical applications, it won’t tell doctors what kind of cancer they’re dealing with, how far along it is or where in the body it’s located.
What is it? At MIT, computer scientists are chewing over an interesting but potentially alarming question: What happens when artificial intelligence “hallucinates”?
Why does it matter? The example offered in a recent BBC article is a self-driving car looking at a stop sign. Seeing anything but a stop sign in this situation is not going to lead to a favorable outcome. MIT computer scientist Anish Athalye and his colleagues have demonstrated situations in which “noise,” in the form of slight tweaks to texture or color, can be introduced into an image of a cat, for instance. The alteration might escape the human eye, but it could still lead a neural network to look at the cat and see a bowl of guacamole. Similarly, the placement of a sticker on a stop sign could trick an AI system into misreading it or disregarding it altogether. Scientists call this kind of AI misstep an adversarial example. Athalye said, “At first this started off as a curiosity. Now, however, people are looking at it as a potential security issue as these systems are increasingly being deployed in the real world.”
How does it work? To a degree, neural networks learn similarly to how young children do: by processing enough visual images that they’re able to identify patterns and, eventually, certain objects in them — like cats. But the process by which neural networks do this still isn’t completely understood by scientists, and these adversarial examples are doing a good job of illustrating just how far these AI systems lag behind the awesome power of the human brain. More study is needed. Says Athalye of neural networks, “We don’t currently understand them well enough to, for example, explain exactly why the phenomenon of adversarial examples exists and know how to fix it.”
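The core idea behind these adversarial examples can be sketched in a few lines of code. The toy below is an illustration only, not the MIT team’s actual method: it uses a made-up four-pixel “image” and a linear classifier standing in for a real neural network, with the “cat”/“guacamole” labels borrowed from the example above. Nudging each pixel by a small amount against the sign of the model’s weights — the same logic as gradient-based attacks on real networks — is enough to flip the predicted class, even though the perturbation is capped per pixel.

```python
import numpy as np

# Toy linear "classifier" over a 4-pixel image (illustrative weights):
# a positive score means class "cat", otherwise class "guacamole".
w = np.array([0.9, -0.5, 0.3, -0.2])   # the model's learned weights
x = np.array([0.6, 0.5, 0.4, 0.5])     # the original image, pixels in [0, 1]

def label(img):
    return "cat" if w @ img > 0 else "guacamole"

print(label(x))   # "cat" — the clean image scores positive (about 0.31)

# Adversarial nudge: shift every pixel by at most eps in the direction
# that lowers the score, i.e. against the sign of each weight. This is
# the fast-gradient-sign idea, since for a linear model the gradient of
# the score with respect to the input is just w.
eps = 0.2
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(np.max(np.abs(x_adv - x)))   # no pixel moved by more than eps
print(label(x_adv))   # "guacamole" — the small nudge flips the class
```

Against a real network the perturbation budget is far smaller relative to the image, which is why the change can escape the human eye while still fooling the model.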
What is it? “It’s time for action” on facial recognition technology, said Microsoft President Brad Smith in a blog post this week. He’s calling on governments around the world to start adopting legislation that will regulate the uses and limits on this one aspect of our brave new world.
Why does it matter? “The facial recognition genie, so to speak, is just emerging from the bottle,” Smith wrote. “Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues.” There’s both promise and danger in facial recognition, Smith notes. In the former category, police in New Delhi recently were able to use it to track down nearly 3,000 missing children in four days. But its design can also reflect human bias, such as when programs return higher error rates when analyzing the features of women and people of color. And facial recognition raises serious questions about privacy and surveillance.
How does it work? Smith calls for laws that require transparency for tech companies, including forthrightness about the technology’s “capabilities and limitations”; third-party testing to ensure facial recognition technology is accurate and bias-free; and privacy requirements so people can understand and give their consent when entering physical or online spaces where facial recognition is used. He also calls for limits on law enforcement agencies’ use of such technology.
What is it? Intel and São Paulo-based Hoobox Robotics have teamed up to develop artificial intelligence technology, called Wheelie 7, that lets people with disabilities control their wheelchairs with facial gestures.
Why does it matter? The current prototype, according to an article by Edward C. Baig, is now being tested by around 60 people in the United States who have conditions like quadriplegia and ALS. Most motorized wheelchairs are operated by joystick or via a complicated series of sensors placed on the body; the Wheelie technology, by contrast, is noninvasive and requires no special training. The “7” in its name refers to the number of minutes it takes to install the kit on any given wheelchair.
How does it work? An Intel camera mounted on the chair creates a 3D map of the user’s face. Caregivers or family members, meanwhile, can program the machine to “assign” which facial expressions should trigger which movements of the chair — forward, backward, left, right. “In tests so far,” Baig writes, “the smile expression is often used to stop the wheelchair rather than make it go one way or another. Why? People might smile because they’ve heard a joke or react to seeing a loved one, and you wouldn’t want the wheelchair to move just because of that.”
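The “assign gestures to movements” step described above amounts to a configurable lookup table. The sketch below is purely hypothetical — the gesture names and commands are invented for illustration and are not Hoobox’s actual API — but it shows the shape of such a mapping, including the article’s point about reserving the smile for stopping:

```python
# Hypothetical gesture-to-command table; names are illustrative only.
GESTURE_MAP = {
    "raised_eyebrows": "forward",
    "pucker": "backward",
    "half_smile_left": "turn_left",
    "half_smile_right": "turn_right",
    "full_smile": "stop",  # smiles happen spontaneously, so map them to stop
}

def command_for(gesture: str) -> str:
    # An unrecognized or ambiguous gesture defaults to stopping the
    # chair, the safe behavior for a mobility device.
    return GESTURE_MAP.get(gesture, "stop")

print(command_for("raised_eyebrows"))  # forward
print(command_for("laugh"))            # stop (safe default)
```

Defaulting every unknown input to “stop” mirrors the design reasoning in the quote: spontaneous expressions should never cause unintended motion.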