For reasons nobody really knows, Science Digest this week was filled with summaries of different elements of robotics. Most of these summaries (if not all) come from academia rather than corporate R&D shops. Regardless, some of these are very interesting – the alternative to LIDAR seems headed for quick commercialization, and they are already working w/ Toyota.
University of Tokyo – Robotic AI learns to be spontaneous
Summary: Autonomous functions for robots, such as spontaneity, are highly sought after. Many control mechanisms for autonomous robots are inspired by the functions of animals, including humans. Roboticists often design robot behaviors using predefined modules and control methodologies, which makes them task-specific and limits their flexibility. Researchers offer an alternative machine learning-based method for designing spontaneous behaviors by capitalizing on complex temporal patterns, like the neural activity of animal brains. They hope to see their design implemented in robotic platforms to improve their autonomous capabilities.
Key Quote: “Reservoir computing (RC) is a machine learning technique that builds on dynamical systems theory and provides the basis of the team’s approach. RC is used to control a type of neural network called a recurrent neural network (RNN). Unlike other machine learning approaches that tune all neural connections within a neural network, RC only tweaks some parameters while keeping all other connections of an RNN fixed, which makes it possible to train the system faster. When the researchers applied principles of RC to a chaotic RNN, it exhibited the kind of spontaneous behavioral patterns they were hoping for. For some time, this has proven a challenging task in the field of robotics and artificial intelligence. Furthermore, the training for the network takes place prior to execution and in a short amount of time.”
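The core idea in the quote – train only a small readout while the recurrent connections stay fixed – can be sketched with a minimal echo state network. This is a generic reservoir computing illustration with made-up parameters (reservoir size, toy sine-prediction task), not the University of Tokyo team's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, n_in = 200, 1

# Fixed random input and recurrent weights. Only W_out below is trained;
# W and W_in are never updated -- the defining trait of reservoir computing.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(inputs):
    """Drive the fixed RNN with an input sequence, collecting its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a slightly time-shifted sine from the original sine.
t = np.linspace(0, 20, 2000)
u, y = np.sin(t), np.sin(t + 0.1)
X = run_reservoir(u)

# Train only the linear readout, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
mse = np.mean((pred[200:] - y[200:]) ** 2)  # skip initial washout states
print(mse)
```

Because training reduces to one linear solve instead of backpropagation through time, the "train the system faster" claim in the quote follows directly.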
Cornell University – Stretchable ‘skin’ sensor gives robots human sensation
Summary: Cornell University researchers have created a fiber-optic sensor that combines low-cost LEDs and dyes, resulting in a stretchable “skin” that detects deformations such as pressure, bending and strain. This sensor could give soft robotic systems – and anyone using augmented reality technology – the ability to feel the same rich, tactile sensations that mammals depend on to navigate the natural world.
Key Quote: “Right now, sensing is done mostly by vision,” Shepherd said. “We hardly ever measure touch in real life. This skin is a way to allow ourselves and machines to measure tactile interactions in a way that we now currently use the cameras in our phones. It’s using vision to measure touch. This is the most convenient and practical way to do it in a scalable way.”
Princeton University, Engineering School – Machine learning guarantees robots’ performance in unknown territory
Summary: As engineers increasingly turn to machine learning methods to develop adaptable robots, new work makes progress on safety and performance guarantees for robots operating in novel environments with diverse types of obstacles and constraints.
Key Quote: “In three new papers, the researchers adapted machine learning frameworks from other arenas to the field of robot locomotion and manipulation. They turned to generalization theory, which is typically used in contexts that map a single input onto a single output, such as automated image tagging. The new methods are among the first to apply generalization theory to the more complex task of making guarantees on robots’ performance in unfamiliar settings. While other approaches have provided such guarantees under more restrictive assumptions, the team’s methods offer more broadly applicable guarantees on performance in novel environments, said Majumdar.”
University of California, San Diego – Upgraded radar can enable self-driving cars to see clearly no matter the weather
Summary: A new kind of radar could make it possible for self-driving cars to navigate safely in bad weather. Electrical engineers developed a clever way to improve the imaging capability of existing radar sensors so that they accurately predict the shape and size of objects in the scene. The system worked well when tested at night and in foggy conditions.
Key Quote: “It’s a LiDAR-like radar,” said Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering. It’s an inexpensive approach to achieving bad weather perception in self-driving cars, he noted. “Fusing LiDAR and radar can also be done with our techniques, but radars are cheap. This way, we don’t need to use expensive LiDARs.” “The system consists of two radar sensors placed on the hood and spaced an average car’s width apart (1.5 meters). Having two radar sensors arranged this way is key — they enable the system to see more space and detail than a single radar sensor.”
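A rough intuition for why the 1.5 m baseline matters: each radar alone measures range, but two ranges from separated sensors can be intersected to recover an object's position. The sketch below is plain two-circle trilateration under assumed geometry (radars at the ends of the baseline, object ahead of the car) – a simplified illustration, not the UC San Diego team's actual processing:

```python
import math

BASELINE = 1.5  # meters between the two hood-mounted radars

def locate(r_left, r_right, baseline=BASELINE):
    """Intersect the two range circles; radars sit at (0, 0) and
    (baseline, 0). Returns the forward (y >= 0) intersection, or
    None when the two ranges are geometrically inconsistent."""
    x = (r_left**2 - r_right**2 + baseline**2) / (2 * baseline)
    y_sq = r_left**2 - x**2
    if y_sq < 0:
        return None
    return (x, math.sqrt(y_sq))  # y >= 0: the area ahead of the car

# An object 10 m ahead, centered between the radars: both ranges equal.
r = math.hypot(0.75, 10.0)
print(locate(r, r))  # ~ (0.75, 10.0)
```

With a single radar, every point on one range circle is indistinguishable; the second, offset sensor collapses that ambiguity, which is the "see more space and detail" effect the quote describes.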