Physicists in Germany designed an optical mirror that’s a thousand times thinner than a human hair but produces a powerful reflection, scientists in Washington created a tiny wireless camera system that can be attached to the backs of insects, and a Russian team came up with a way to make it easier to use nanoparticle therapies to treat disease. In this week’s coolest scientific advances, big ideas come in small, small, small packages.

What is it? At Germany’s Max Planck Institute of Quantum Optics, physicists designed the “lightest optical mirror imaginable.” At “only several tens of nanometers thin,” it’s a thousand times thinner than a human hair, yet produces a reflection so strong that it’s visible to the naked eye.
Why does it matter? The mirror could be an important tool for physicists studying quantum theories related to light-matter interaction, the engineering of quantum devices and more. David Wei, co-author of a new paper in Nature, said, “Many new exciting opportunities have been opened, such as an intriguing approach to study quantum optomechanics, which is a growing field of studying the quantum nature of light with mechanical devices.”
How does it work? The mirror consists of only a few hundred identical atoms, arranged in the “two-dimensional array of an optical lattice formed by interfering laser beams,” according to the institute. The pattern and the way that the atoms are spaced act to “suppress a diffuse scattering of light,” while incoming photons bounce around between the atoms more than once before being reflected: “Both effects, the suppressed scattering of light and the bouncing of the photons, lead to an ‘enhanced cooperative response to the external field,’ which means in this case: a very strong reflection.”

What is it? Researchers at the University of Washington designed a super-lightweight, steerable, wireless camera system that can be attached to the backs of beetles and other insects.
Why does it matter? Beetles haven’t learned to enjoy YouTube yet, so admittedly they might not really get much out of the videos they film while scurrying around — but insects aren’t the prime beneficiaries of this kind of technology. Roboticists are: They’re taking cues from how insects evolved in order to develop better, smaller cameras. Vision, for instance, requires a lot of energy expenditure in those tiny bodies, so some flies have just a small high-resolution region in their compound eyes, compensating by moving their heads when they need to get a better look at something. As the researchers write in a new paper in Science Robotics, “By understanding the trade-offs made by insect vision systems in nature, we can design better vision systems for insect-scale robotics in a way that balances energy, computation, and mass.”
How does it work? The tiny battery-powered camera system, which weighs about 250 milligrams — a small fraction of the weight of a playing card — streams video to a smartphone at between 1 and 5 frames a second, and sits on a mechanical arm that can rotate 60 degrees. As in some insects, that allows viewers controlling the system to obtain high-res or panoramic pics without using a lot of energy.
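The bandwidth logic behind that fly-inspired design can be sketched with some back-of-the-envelope arithmetic. The resolutions and bit depth below are illustrative assumptions, not figures from the paper; the point is simply how much cheaper it is to steer a small high-resolution window than to stream a wide high-resolution frame:

```python
# Hypothetical numbers: illustrates why a small, steerable high-res window
# (the fly-eye strategy) costs far less bandwidth -- and thus energy --
# than capturing a wide high-res frame all at once.

def stream_bytes_per_sec(width, height, fps, bits_per_pixel=8):
    """Raw (uncompressed) video bandwidth in bytes per second."""
    return width * height * bits_per_pixel // 8 * fps

# A wide panoramic frame at the system's upper frame rate of 5 fps.
panorama = stream_bytes_per_sec(width=640, height=480, fps=5)

# A narrow window that the mechanical arm sweeps across its 60-degree arc.
steerable = stream_bytes_per_sec(width=160, height=120, fps=5)

print(f"panorama:  {panorama} B/s")
print(f"steerable: {steerable} B/s")
print(f"savings:   {panorama // steerable}x")  # 16x fewer bytes per second
```

Under these assumed numbers, steering a quarter-size window in each dimension cuts the data (and the energy spent capturing and transmitting it) sixteenfold, which is the same trade-off the researchers describe insects making with a small high-resolution region in their compound eyes.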

What is it? In Russia, a team of researchers from several institutions figured out how to “resolve a key problem that has prevented the introduction of novel drugs into clinical practice for decades,” in a finding that could have important implications for nanomedicine.
Why does it matter? As one of the institutions involved, the Moscow Institute of Physics and Technology, explains in a news release, medical research has turned in past decades toward the promise of nanoparticles — which, compared to therapeutic molecules that perform only one function, can activate a number of complex reactions against diseases such as cancer. The field has been held back by a specific problem: The immune system is so efficient at eliminating “nanosized foreign entities” that nanoparticles are often cleared from the bloodstream in minutes or seconds — before they’ve had a chance to do their work.
How does it work? The Russian scientists came up with a way to prolong the circulation of nanoparticles in the bloodstream by, essentially, distracting the immune system. The immune system is already constantly clearing out old red blood cells; the scientists reasoned that if they “slightly intensified” this process, they could extend the circulation of the desired nanoparticles. Maxim Nikitin, lead author of a new paper in Nature Biomedical Engineering, said, “While it becomes busy clearing red blood cells, less attention is given to the clearance of the therapeutic nanoparticles. Importantly, we wanted to distract the immune system in the most gentle way, ideally via the body's innate mechanisms rather than by artificial substances.”

What is it? A new technique developed by researchers at the University of Dayton will make it less expensive to do 3D printing on a nanoscale — that’s a thousand times smaller than a human hair — and also easier to correct mistakes in the printing process.
Why does it matter? 3D printing is popular with industrial designers and engineers because it offers the ability to rapidly turn out prototypes, and 3D-printed nanostructures are of interest to, among others, the designers of medical devices. But 3D nanoprinting has previously been difficult and costly. Chenglong Zhao, a professor of physics and electro-optics at Dayton and a co-author of a new paper in Nano Letters, said, “This nano-version 3D-printing technology fills this gap by providing engineers an affordable manufacturing tool for the fabrication of nanostructures and devices, which has become increasingly important for applications that are enabled by nanotechnology.”
How does it work? Unlike other nanofabrication technologies, the new technique doesn’t need to be conducted in a vacuum, and it relies on commonly available lasers, such as the laser pointers used for presentations. It also lets users correct errors that come up in the manufacturing process, said Qiwen Zhan, a Dayton electro-optics professor: “Manufacturing error correction is extremely important to reduce manufacturing cost and increase the success rate of a production line. For example, before, if a tiny manufacturing error is found in an electronic chip, the entire chip has to be discarded. This technology will enable us to correct manufacturing errors even after manufacturing.”
What is it? We won’t truly get in-home robot helpers until the machines can better learn how to perceive the physical environments they’re moving around in — and that’s no small feat. But now Massachusetts Institute of Technology roboticists have developed “a representation of spatial perception for robots that is modeled after the way humans perceive and navigate the world.” They call the model 3D Dynamic Scene Graphs.
Why does it matter? Machines with spatial perception could work on factory floors, for instance, as well as in search and rescue situations. “We are essentially enabling robots to have mental models similar to the ones humans use,” said Luca Carlone, an MIT professor of aeronautics and astronautics. “This can impact many applications, including self-driving cars, search and rescue, collaborative manufacturing, and domestic robotics.” It could also become part of virtual and augmented reality technologies.
How does it work? The 3D Dynamic Scene Graphs model, according to MIT News, “enables a robot to quickly generate a 3D map of its surroundings that also includes objects and their semantic labels (a chair versus a table, for instance), as well as people, rooms, walls and other structures that the robot is likely seeing.” Having that information compressed into a 3D map helps the robot quickly make choices about what direction to go, similar to how humans make those decisions, Carlone said: “If you need to plan a path from your home to MIT, you don’t plan every single position you need to take. You just think at the level of streets and landmarks, which helps you plan your route faster.”
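That layered idea — labeled objects grouped into rooms, with planning done over rooms rather than raw positions — can be sketched as a toy data structure. The class names and layout here are illustrative assumptions, not MIT’s actual 3D Dynamic Scene Graphs implementation:

```python
from dataclasses import dataclass, field

# A toy layered scene graph: rooms contain semantically labeled objects,
# and rooms connect to neighboring rooms. A planner can then search over
# rooms (coarse, like "streets and landmarks") instead of every 3D position.

@dataclass
class SceneObject:
    label: str       # semantic label, e.g. "chair" versus "table"
    position: tuple  # (x, y, z) in the robot's 3D map

@dataclass
class Room:
    name: str
    objects: list = field(default_factory=list)
    neighbors: list = field(default_factory=list)  # names of connected rooms

def room_level_path(rooms, start, goal):
    """Breadth-first search over the room layer of the graph."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in rooms[path[-1]].neighbors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

rooms = {
    "kitchen": Room("kitchen", [SceneObject("table", (1, 2, 0))], ["hall"]),
    "hall":    Room("hall", [], ["kitchen", "office"]),
    "office":  Room("office", [SceneObject("chair", (5, 1, 0))], ["hall"]),
}
print(room_level_path(rooms, "kitchen", "office"))  # ['kitchen', 'hall', 'office']
```

The payoff is exactly the one Carlone describes: the robot commits to a coarse route (kitchen → hall → office) quickly, and only then worries about fine-grained positions within each room.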