This week, researchers rigged off-the-shelf technology to take their ideas to the next level. Engineers in California used a movie projector to 3D-print super smooth, bendable objects; a team in Massachusetts analyzed drone data to spot problems with solar panels early; and a group in New York used Jenga blocks to teach their uncanny robot foresight, spatial reasoning and fine motor skills. Science: it's all about patience and passion. Plus: a robot develops a model of self, and diamonds made from carbon fibers.
What is it? Scientists at the University of California, Berkeley, have created a “replicator” — named after the “Star Trek” device that materializes objects on demand. The device uses rays of light to shape 3D-printed objects. The results are “smoother, more flexible and more complex than what is possible with traditional 3D printers.”
Why does it matter? 3D-printed parts are typically created by fusing lines of metal powder layer by layer according to a computer design, which can lead to a “stair-step” effect along the object’s edges — basically, the final part is not totally smooth. It’s also hard to make truly flexible 3D-printed objects because they can warp during the printing process. The new method eliminates those problems and could be used for delicate products such as prosthetics and eyeglass lenses.
How does it work? Assistant professor of mechanical engineering Hayden Taylor and his team created a special 3D-printing resin made of liquid polymers with photosensitive molecules and dissolved oxygen. Light activates the photosensitive elements and depletes the oxygen, causing the material to change from liquid to solid. “Basically, you’ve got an off-the-shelf video projector, which I literally brought in from home, and then you plug it into a laptop and use it to project a series of computed images, while a motor turns a cylinder that has a 3D-printing resin in it,” Taylor said. That’s the layperson’s version, anyway. A fuller explanation, under the heading “Volumetric Additive Manufacturing via Tomographic Reconstruction,” is available in Science.
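The tomographic trick can be sketched in a few lines: compute one light pattern per rotation angle from the target shape, smear each pattern back through a slice of the resin as the cylinder turns, and let the dose accumulate until voxels cross a solidification threshold. The toy 2D sketch below uses plain unfiltered back-projection on an invented disk target; it is an illustration of the idea, not the team's actual reconstruction algorithm:

```python
import numpy as np

def forward_project(target, angles):
    """Toy 'computed images': sum the target along lines for each angle."""
    size = target.shape[0]
    ys, xs = np.mgrid[0:size, 0:size] - size / 2.0
    patterns = []
    for theta in angles:
        # Position of each voxel along this angle's projection axis.
        s = xs * np.cos(theta) + ys * np.sin(theta) + size / 2.0
        idx = np.clip(s.astype(int), 0, size - 1)
        proj = np.bincount(idx.ravel(), weights=target.ravel(), minlength=size)
        patterns.append(proj / proj.max())
    return patterns

def backproject(patterns, angles, size):
    """Accumulate light dose in a 2D resin slice from the 1-D patterns."""
    ys, xs = np.mgrid[0:size, 0:size] - size / 2.0
    dose = np.zeros((size, size))
    for pattern, theta in zip(patterns, angles):
        s = xs * np.cos(theta) + ys * np.sin(theta) + size / 2.0
        idx = np.clip(s.astype(int), 0, size - 1)
        dose += pattern[idx]  # smear the pattern across the rotated slice
    return dose

# Target: a solid disk to be "printed" all at once, no layers.
size = 64
ys, xs = np.mgrid[0:size, 0:size] - size / 2.0
target = (xs**2 + ys**2 < 10**2).astype(float)

angles = np.linspace(0, np.pi, 60, endpoint=False)
dose = backproject(forward_project(target, angles), angles, size)

# Voxels whose total dose crosses the threshold solidify; the rest wash away.
printed = dose > 0.8 * dose.max()
print(printed[size // 2, size // 2])  # the disk's center solidifies
```

Because every voxel is exposed from many directions at once, the finished shape has no layer boundaries, which is the source of the smooth surfaces described above.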
What is it? Sure, sheep are fine for helping with the upkeep of solar farms — but have you considered algorithmically inclined unmanned aerial vehicles? MIT spinoff Raptor Maps uses machine-learning-enabled drones to track problems with solar panels so they can be maintained as efficiently as possible.
Why does it matter? The solar industry is shining brightly, but as solar farms spread, they run into the problem of upkeep. Operators are already using drones to identify damaged cells, but mostly that just means technicians have to sift through mountains of data to find problems. Raptor Maps, by contrast, puts software to work on that data, estimating the cost of repairs and helping technicians prioritize their tasks. “We can enable technicians to cover 10 times the territory and pinpoint the most optimal use of their skill set on any given day,” said co-founder and CEO Nikhil Vadhavkar.
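The triage step can be pictured as a simple ranking problem: score each detected fault by the revenue it bleeds per repair dollar, then send technicians to the top of the list first. Everything below (fault types, loss rates, prices) is invented for illustration; it is not Raptor Maps' actual model:

```python
# Hypothetical sketch: once drone imagery has been classified into fault
# types, triage is just sorting by estimated impact.

FAULTS = [  # (site, fault type, affected kW, output loss %, repair cost $)
    ("Field A", "cracked cell", 0.4, 10, 120),
    ("Field A", "string outage", 8.0, 100, 300),
    ("Field B", "soiling", 50.0, 5, 80),
    ("Field B", "hot spot", 1.2, 30, 150),
]

PRICE_PER_KWH = 0.10   # assumed wholesale electricity rate
HOURS_PER_YEAR = 1600  # assumed effective sun hours per year

def annual_loss_dollars(kw, loss_pct):
    """Revenue lost per year if the fault goes unrepaired."""
    return kw * (loss_pct / 100) * HOURS_PER_YEAR * PRICE_PER_KWH

def prioritize(faults):
    """Rank faults by payback: yearly loss avoided per repair dollar."""
    scored = [(annual_loss_dollars(kw, pct) / cost, site, kind)
              for site, kind, kw, pct, cost in faults]
    return [(site, kind) for _, site, kind in sorted(scored, reverse=True)]

for site, kind in prioritize(FAULTS):
    print(site, kind)
```

The point of the sketch is the ordering: a cheap fix on a large, badly underperforming array outranks an expensive fix on a single panel, which is how software can tell a technician where a day's work matters most.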
How does it work? The venture was founded in 2015 by a small crew of MIT grads with the initial intention to focus on the agricultural industry. In 2017 they publicly released the machine-learning software they’d developed for farmers — but found that most people were using it, instead, for solar farms. According to MIT, “Raptor Maps has found success in the industry by releasing its standards for data collection and letting customers … use off-the-shelf hardware [i.e. drones] to gather the data themselves. After the data is submitted to the company, the system creates a detailed map of each solar farm and pinpoints any problems it finds.” Last year Raptor Maps processed data from 4 gigawatts’ worth of solar installations on six continents, capacity enough to power 3 million homes.
What is it? Another MIT-associated robot, by contrast, is working closer to the ground: Engineers developed it to play the game of Jenga.
Why does it matter? Anybody who’s played Jenga knows that it requires foresight, spatial reasoning and fine motor skills to keep the whole thing from crashing down — a combination of abilities robots could use on, say, factory lines assembling complex products like cellphones. “Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing and aligning pieces,” said MIT assistant mechanical engineering professor Alberto Rodriguez. “It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks.”
How does it work? The robot, described at length in Science Robotics, comes equipped with a soft-pronged gripper, a force-sensing wrist cuff and a camera. Rather than simply being programmed to play Jenga, the machine is programmed to learn as it goes — analyzing the outcomes of certain moves in order to make smarter choices in the future. It still has a bit of learning to do, though, before it’s able to best a skilled human competitor.
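One toy version of that learn-as-it-goes loop: push blocks gently, record the wrist-cuff force and whether the block slid freely, and learn a force threshold for deciding which blocks are safe to pull. The simulated force readings and the simple midpoint rule below are invented stand-ins, not the MIT team's actual learning model:

```python
import random
random.seed(0)

# Toy stand-in for the robot's exploration phase: each probe returns a
# resistance force, and we learn a threshold separating blocks that slide
# freely from load-bearing ones. All numbers here are invented.

def probe(block_is_loaded):
    """Simulated wrist-cuff force reading for a gentle push."""
    base = 2.0 if block_is_loaded else 0.5
    return base + random.uniform(-0.2, 0.2)

free_forces, stuck_forces = [], []
for _ in range(50):  # push random blocks, record the outcome of each move
    loaded = random.random() < 0.5
    force = probe(loaded)
    (stuck_forces if loaded else free_forces).append(force)

# Decision rule learned from outcomes: midpoint between the class means.
threshold = (sum(free_forces) / len(free_forces)
             + sum(stuck_forces) / len(stuck_forces)) / 2

def should_extract(force):
    """Predict whether a probed block is safe to pull."""
    return force < threshold

print(f"learned force threshold: {threshold:.2f}")
```

The robot's advantage over pure trial and error is exactly this: every probe, successful or not, sharpens the rule used to choose the next move.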
What is it? Still, Jenga is child’s play compared to another new development from Columbia University that brings the field of robotics closer to a long-held goal: machine self-awareness. Researchers there have designed a robot that can create a “self-simulation,” similar to how humans are able to maintain a self-image.
Why does it matter? To the extent that scientists can program robots to develop a consciousness, it may help us understand the origins of our own consciousness — one of the great mysteries of humanity. In the more immediate term, a robot with a conception of its own body might, for instance, be able to autonomously identify defects and make repairs, meaning it’ll be better equipped to make it through the inevitable “Matrix”-style war with humanity we’ve been primed for by Hollywood. Mechanical engineering professor Hod Lipson, who directs Columbia’s Creative Machines Lab, said, “If we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it’s essential that they learn to simulate themselves.”
How does it work? Deep learning or practice, practice, practice. The robot at first moved about randomly but eventually was able to collect enough data about its movements that, with the help of the engineers’ algorithms, it was able to develop a rough self-conception. “After less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters,” Columbia reports; it was able to use this self-knowledge to pick up small spheres and move them into a glass receptacle, and to identify and get accustomed to a defective part researchers had placed on it. The project is further described in Science Robotics.
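The training loop can be caricatured in a few lines: the "robot" tries random joint angles, records where its hand ends up, and fits a model of its own kinematics from that data alone. The real system trained deep networks on a physical arm; this sketch swaps in a two-link planar toy arm and a linear fit on trig features, purely for illustration:

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy self-model: a 2-link planar arm "babbles" random joint angles,
# observes its hand position, and fits a model of its own body.

L1, L2 = 0.7, 0.5  # true link lengths, hidden from the learner

def hand_position(t1, t2):
    """Ground-truth forward kinematics the learner can only observe."""
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=-1)

# Motor babbling: random joint angles and the resulting hand positions.
t1 = rng.uniform(-np.pi, np.pi, 500)
t2 = rng.uniform(-np.pi, np.pi, 500)
obs = hand_position(t1, t2)

# Self-model: hand position as a linear function of trig features.
feats = np.stack([np.cos(t1), np.cos(t1 + t2),
                  np.sin(t1), np.sin(t1 + t2)], axis=1)
coef, *_ = np.linalg.lstsq(feats, obs, rcond=None)

# The fitted coefficients recover the arm's own link lengths.
pred = feats @ coef
err = np.abs(pred - obs).max()
print(f"max self-model error: {err:.2e} m")
```

Once such a model exists, the robot can plan in simulation instead of by trial and error, which is what lets it place spheres in a cup or notice that a swapped-in part no longer matches its self-image.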
What is it? Scientists at North Carolina State University have formulated a room-temperature method for turning carbon fibers into diamond fibers.
Why does it matter? Diamonds aren’t just bling nonpareil — they’re also useful in optical devices and as a coating for cutting tools in construction, undersea drilling and other industries. Natural diamonds are also rare, expensive and forged deep in the earth under conditions of extreme heat and pressure, and their extraction is dogged by human-rights violations. Those conditions are difficult to replicate in the lab, though many have tried — including GE in the 1950s.
How does it work? Researchers devised a twofold method: First, they melt the carbon fibers by heating them with laser pulses to temperatures of 4,000 kelvins — a process that takes only 100 nanoseconds. Then, they rapidly cool the material on a substrate made of sapphire, glass or plastic polymer. The “undercooling,” as it’s called, prevents the carbon from changing from a solid state into a gas, which it typically does when heated. Materials science and engineering professor Jagdish Narayan, lead author of a new paper in Nanoscale, said, “Without undercooling, you cannot convert carbon into diamond this way.”