This week we learned about lab-grown “mini tumors” that could help doctors pick the right treatments for cancer, microrobots inspired by jumping spiders, and an electronically controlled artificial eye that’s thinner than a strand of human hair and can tune out its flaws. This is all science, friends, no fiction!
What is it? Harvard researchers developed an “electronically controlled artificial eye” that “simultaneously controls for three of the major contributors to blurry images: focus, astigmatism, and image shift.” The design, just 30 microns thick (thinner than a human hair), “combines breakthroughs in artificial muscle technology with metalens technology to create a tunable metalens that can change its focus in real time, just like the human eye,” said Alan She, a graduate student at Harvard’s Graduate School of Arts and Sciences and first author of the team’s paper, which appeared in the journal Science Advances. “We go one step further to build the capability of dynamically correcting for aberrations such as astigmatism and image shift, which the human eye cannot naturally do.”
Why does it matter? The research could find “a wide range of applications, including cell phone cameras, eyeglasses, and virtual and augmented reality hardware,” according to Federico Capasso, a Harvard physics professor and senior author of the paper. “It also shows the possibility of future optical microscopes, which operate fully electronically and can correct many aberrations simultaneously,” he said.
How does it work? The team scaled up a “metalens,” a device that combines nanotechnology and data to focus light and eliminate flaws. They also developed algorithms to process the data the metalens generated. Next, the team attached the lens to artificial muscle made from a transparent dielectric elastomer, and used electricity to flex the muscle and control the eye. The researchers demonstrated that “the lens can simultaneously focus, control aberrations caused by astigmatisms, and perform image shift,” The Harvard Gazette reported.
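The tuning mechanism described above can be illustrated with a toy model (this is not the Harvard team's actual code, and the coupling constant `k` below is made up): the applied voltage squeezes the elastomer, producing an in-plane stretch that grows roughly with the square of the voltage, and stretching an ideal flat lens's phase profile by a factor s scales its focal length by s².

```python
def actuator_stretch(voltage_v, k=1e-8):
    """Fractional in-plane stretch of the elastomer.

    Toy Maxwell-stress model: strain grows with voltage squared.
    The coupling constant k is illustrative, not a measured value.
    """
    return 1.0 + k * voltage_v ** 2


def tuned_focal_length(f0_mm, voltage_v):
    """Focal length of the stretched metalens.

    Assumes an ideal quadratic phase profile, for which a transverse
    stretch by factor s scales the focal length by s**2.
    """
    s = actuator_stretch(voltage_v)
    return f0_mm * s ** 2


# No voltage: the lens keeps its rest focal length.
print(tuned_focal_length(50.0, 0))       # 50.0 mm
# Applying kilovolts stretches the membrane and lengthens the focus.
print(tuned_focal_length(50.0, 3000))    # > 50 mm
```

Real dielectric elastomer actuators need kilovolt-scale drive signals at tiny currents, which is why the model above only produces a noticeable stretch at a few thousand volts.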
What is it? Scientists working at the research arm of the Chinese internet company Baidu trained a computer program to accurately clone the voice of a human speaker from a single 4-second voice recording. The AI can also convert a female voice into a male voice, and make words spoken with a British accent sound as if they were uttered by an American. You can listen to the samples here.
Why does it matter? Baidu started its Deep Voice project last year with the goal of “teaching machines to generate speech from text that sound more human-like.” The company said that “voice cloning is expected to have significant applications in the direction of personalization in human-machine interfaces.”
How does it work? In this latest study, the team focused on two areas of voice cloning: speaker adaptation and speaker encoding. They applied “backpropagation-based optimization” to tackle the first problem. “The speaker encoding model has time-and-frequency-domain processing blocks to retrieve speaker identity information from each audio sample, and attention blocks to combine them in an optimal way,” Baidu wrote. “In terms of naturalness of the speech and similarity to the original speaker, both [techniques] demonstrate good performance, even with very few cloning audios.”
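Baidu's models aren't reproduced here, but the speaker-encoding idea can be sketched in miniature: condense a short audio sample's per-frame features into one fixed-size “identity” vector, then compare identities by cosine similarity. Real encoders use learned attention blocks to weight the frames; this sketch substitutes a plain average.

```python
import math


def speaker_embedding(frames):
    """Toy speaker encoder: average per-frame feature vectors into a single
    fixed-size identity vector. (Baidu's encoder combines frames with
    learned attention instead of a uniform average.)"""
    dim = len(frames[0])
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(dim)]


def cosine_similarity(a, b):
    """Cosine similarity between two identity vectors: 1.0 means the
    samples look like the same speaker, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


# Two clips pointing the same way in feature space: same speaker.
a = speaker_embedding([[1.0, 0.0], [3.0, 0.0]])
b = speaker_embedding([[2.0, 0.0]])
print(cosine_similarity(a, b))  # 1.0
```

A cloning system then conditions its text-to-speech generator on the embedding, which is what lets a 4-second sample steer the output voice.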
What is it? Students and faculty at Florida Polytechnic University are working on a “Happy Suit” for astronauts. Specifically, the team is developing technology that will “detect emotional and physical deficiencies in astronauts through wireless sensors that will then send an immediate response to improve the ‘atmosphere,’ and adjust the astronauts’ environment to fit their individual needs,” according to a press release. “The adjustments include changes in temperature, light exposure, light color, and oxygen levels.”
Why does it matter? NASA is working to send humans to Mars in the 2030s, and SpaceX is aiming for 2024. The trip takes about nine months, and the university reported that depression was “a major problem in space, as astronauts can be adversely affected by factors like insufficient exercise, excessive exposure to light and lack of sleep.”
How does it work? The school said that the project, which won a grant from NASA and also involves UCLA, is aiming for wireless sensors that will be part of astronaut clothing. In addition to actively adjusting the environment, the sensors could also monitor the travelers’ pulse rate, blood pressure and joint angles.
What is it? The age of useful robotic insects may be upon us. Engineers at the University of Manchester are taking cues from jumping spiders and bees to develop “bio-inspired microrobots.” This is not the dystopian stuff you’ve seen on “Black Mirror.” “For our robotic spiders research, we are looking at a specific species of jumping spider called Phidippus regius,” said Mostafa Nabawy, the university’s microsystems research theme leader. “We have trained it to jump different distances and heights, recording the spider’s every movement in extreme detail through high resolution cameras which can be slowed down.”
Why does it matter? “Swarms of robot bees pollinating crops and flowers could become a reality,” Nabawy said, and spider robots “can be used for a variety of different purposes in complex engineering and manufacturing and can be deployed in unknown environments to execute different missions.” Nabawy continued: “We’re aiming to create the world’s first robot bee that can fly unaided and unaccompanied. These technologies can also be used for many different applications, including improving the current aerodynamic performances of aircraft.”
How does it work? The team is using biomechanical data obtained from high-resolution cameras to “model robots that can perform with the same abilities,” Nabawy said. “With this extensive dataset we have already started developing prototype robots that can mimic these biomechanical movements and jump several centimeters.”
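The team's models aren't published in this summary, but the simplest baseline that biomechanical jump data gets checked against is ballistic projectile motion: given a takeoff speed and angle, the jump covers a predictable horizontal distance. A minimal sketch:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def jump_range(speed_ms, angle_deg):
    """Horizontal distance of a ballistic jump launched from ground level.

    Standard projectile-motion result: range = v^2 * sin(2*theta) / g.
    Ignores air drag and any leg or silk dynamics during takeoff.
    """
    theta = math.radians(angle_deg)
    return speed_ms ** 2 * math.sin(2 * theta) / G


# A 45-degree launch maximizes range for a given takeoff speed.
print(jump_range(1.0, 45))  # ~0.102 m, i.e. roughly 10 cm
```

Comparing high-speed camera tracks against this idealized curve is one way to quantify how much extra control a real spider (or a prototype robot mimicking it) exerts mid-jump.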
What is it? Scientists at the Institute of Cancer Research and The Royal Marsden National Health Service Foundation Trust are growing tiny replicas of patients’ tumors from their biopsy samples and using them to test cancer drugs.
Why does it matter? The approach could help doctors find the right drugs for individual patients. “Once a cancer has spread round the body and stopped responding to standard treatments, we face a race against time to find patients a drug that might slow the cancer’s progression and extend their lives,” said study leader Nicola Valeri. “We found that recreating patients’ tumors in the laboratory using this new technique gave us an extremely promising way to predict whether a drug would work for a patient.”
How does it work? The team took biopsy samples from 71 cancer patients whose tumors had spread through the body and who were enrolled in early-stage clinical trials. Next, they placed the cancer cells “inside a gel so they were free to form a 3D shape.” The team tested 55 “established or new drugs against the mini tumors and compared the results with how the patient had responded in the clinic,” ICR reported. “Testing mini tumors was 100 per cent accurate at identifying drugs that wouldn’t work in patients, and picked out drugs that would shrink a patient’s tumor 88 per cent of the time.” The researchers also reported that the method “seemed more effective at predicting drug response” than analyzing the DNA of the original tumors. “We were able to look in incredible detail at how tumors responded to drugs – including patterns of gene activity and mutation, and even how the cancer would evolve in response to treatment,” Valeri said. “We looked at tumors from patients with cancers of the digestive system, but the technique could be applied to a wide variety of cancer types.”
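The two accuracy figures quoted above are, in standard diagnostic terms, a negative predictive value (predicted failures that really failed) and a positive predictive value (predicted responders that really responded). The study's raw counts aren't given in this summary, so the numbers below are invented purely to show the arithmetic:

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive value from a confusion matrix of
    mini-tumor predictions versus actual patient responses.

    tp: predicted to work and did    fp: predicted to work but didn't
    tn: predicted to fail and did    fn: predicted to fail but worked
    """
    ppv = tp / (tp + fp)  # how often a predicted responder really responds
    npv = tn / (tn + fn)  # how often a predicted failure really fails
    return ppv, npv


# Hypothetical counts chosen to reproduce the reported percentages:
# 22 of 25 predicted responders responded (88%), and all 30 predicted
# failures failed (100%).
ppv, npv = predictive_values(tp=22, fp=3, tn=30, fn=0)
print(ppv, npv)  # 0.88 1.0
```

The asymmetry matters clinically: a perfect negative predictive value means the mini tumors never gave false hope for a drug, even though a small fraction of predicted winners still failed.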