Revolutionizing Robot Navigation: Unlocking Energy Efficiency with Brain-Inspired Computing
Robots are increasingly woven into our day-to-day lives, performing tasks ranging from warehouse automation to household chores like vacuuming. However, to navigate effectively from one location to another, these machines need to know where they are in their environment. How far and how long a robot can operate are closely linked to its power consumption, and navigation systems tend to be particularly energy-intensive.
But what if power consumption were no longer a barrier? Our latest research, published in Science Robotics, explores “brain-inspired” computing techniques that promise to revolutionize how robots navigate by making them more energy-efficient than ever, opening up possibilities for battery-powered systems in challenging environments such as disaster zones, underwater settings and even outer space.
How Do Robots “See” the World?
When your smartphone runs out of battery, it is usually just an inconvenience. For robots, however, running out of power can mean the difference between a successful mission and a catastrophic failure, especially in critical situations such as search and rescue operations or deep-sea exploration.
Many of these robots rely on visual place recognition, a method that lets them work out their location from what they "see" through onboard cameras. Unfortunately, this approach is notoriously energy-intensive: robotic vision alone can consume around a third of a lithium-ion battery's power. That is largely because modern robotic vision systems depend on complex, power-hungry machine learning models, akin to those behind tools like ChatGPT.
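To make the idea concrete, here is a minimal sketch of how visual place recognition can work in principle: the robot summarizes each camera view as a compact descriptor and compares it against descriptors of places it has already visited. The descriptor scheme and function names below are illustrative assumptions, not the method used by any particular system.

```python
# Minimal visual place recognition sketch (illustrative, not a production method).
import numpy as np

def describe(image: np.ndarray, grid: int = 8) -> np.ndarray:
    """Summarize an image as a coarse grid of average brightness values."""
    h, w = image.shape[:2]
    cells = [
        image[r * h // grid:(r + 1) * h // grid,
              c * w // grid:(c + 1) * w // grid].mean()
        for r in range(grid) for c in range(grid)
    ]
    desc = np.asarray(cells)
    # Normalize so the descriptor is less sensitive to overall lighting changes.
    return (desc - desc.mean()) / (desc.std() + 1e-8)

def recognize_place(query_image: np.ndarray, reference_descriptors: np.ndarray) -> int:
    """Return the index of the stored place whose descriptor best matches the query."""
    distances = np.linalg.norm(reference_descriptors - describe(query_image), axis=1)
    return int(np.argmin(distances))
```

Real systems use far richer descriptors, typically produced by deep neural networks, which is precisely where much of the energy goes.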
In contrast, the human brain operates using a power level similar to that of a light bulb, allowing us to perceive and navigate with extraordinary efficiency. Inspired by this biological model, our research seeks to develop more effective visual place recognition systems by mimicking certain aspects of the human brain.
Mimicking the Brain
Enter neuromorphic computing: a technology designed to emulate the neural structure of the human brain. Neuromorphic computers are exceptionally energy-efficient, consuming as little as one-hundredth of the power of conventional computers.
This technology is not limited to computer chips; it also extends to bio-inspired cameras known as dynamic vision sensors. Unlike conventional cameras, which continuously stream full frames of data, dynamic vision sensors respond only when they detect movement or changes in the scene, cutting energy consumption to less than 1% of that of a traditional camera.
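The sketch below shows, in highly simplified form (it is not a real sensor driver), the kind of data a dynamic vision sensor reports: per-pixel "events" emitted only where brightness changes beyond a threshold, so a completely static scene produces no data at all. The threshold value and log-brightness model are assumptions made for illustration.

```python
# Simulate event-camera-style output from two ordinary frames (illustrative only).
import numpy as np

def frames_to_events(prev_frame: np.ndarray, next_frame: np.ndarray,
                     threshold: float = 0.2):
    """Emit (row, col, polarity) events where log-brightness changed enough."""
    diff = np.log1p(next_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

# A static scene generates no events, which is where the energy saving comes from.
static = np.full((4, 4), 100, dtype=np.uint8)
print(frames_to_events(static, static))  # -> []
```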
So why aren’t these technologies already widespread in robotics? The main challenge lies in the nature of dynamic vision sensors: they produce a stream of changes rather than the static images that standard visual place recognition systems are built around, so harnessing neuromorphic technology effectively requires new approaches.
A New Kind of LENS
To bridge this gap, we developed a system called Locational Encoding with Neuromorphic Systems (LENS). LENS combines neuromorphic chips and dynamic vision sensors in a single architecture that lets robots perform visual place recognition efficiently. For processing, it uses spiking neural networks, which handle information as discrete spikes, much as neurons in the brain do.
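As a rough illustration of the spiking idea (a toy sketch under simplifying assumptions, not the published LENS implementation), the snippet below shows leaky integrate-and-fire neurons that accumulate incoming events and fire only when a threshold is crossed; the most active output neuron indicates the best-matching place. All parameters, shapes and the random data are placeholders.

```python
# Toy spiking (leaky integrate-and-fire) place matcher, for illustration only.
import numpy as np

def lif_place_matcher(event_counts: np.ndarray, weights: np.ndarray,
                      leak: float = 0.9, threshold: float = 1.0) -> np.ndarray:
    """Run binned events through one spiking layer and count output spikes.

    event_counts: (timesteps, n_inputs) events per pixel per timestep
    weights:      (n_inputs, n_places) learned association between pixels and places
    Returns total spikes per place neuron; the most active neuron is the best match.
    """
    potential = np.zeros(weights.shape[1])
    spike_totals = np.zeros(weights.shape[1])
    for t in range(event_counts.shape[0]):
        potential = leak * potential + event_counts[t] @ weights  # integrate with leak
        fired = potential >= threshold
        spike_totals += fired       # record which neurons spiked this timestep
        potential[fired] = 0.0      # reset neurons that fired
    return spike_totals

# Example: 10 timesteps of events over 64 input pixels, 5 stored places.
rng = np.random.default_rng(0)
events = rng.poisson(0.3, size=(10, 64))
weights = rng.random((64, 5)) * 0.05
print("best-matching place:", int(np.argmax(lif_place_matcher(events, weights))))
```

Because the neurons only do work when spikes arrive, computation (and therefore energy use) scales with how much is happening in the scene rather than with a fixed frame rate.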
The results are promising: with this brain-inspired framework, the energy needed for visual place recognition falls by more than 90%. Given that vision can account for roughly a third of a robot's energy use, a reduction of that size could markedly improve operational efficiency.
A Robot in the Wild
For testing, we fitted the LENS system to a hexapod robot capable of navigating diverse terrains. In our experiments, the robot matched the place recognition performance of traditional visual place recognition systems while using only a fraction of the energy.
The Road Ahead
Our work comes at a critical juncture, as AI development trends toward ever larger, more energy-hungry models. Systems such as ChatGPT already demand substantial computing resources, and questions about the sustainability of those energy demands loom large.
The future of robotic navigation lies in developing compact, energy-efficient AI driven by neuromorphic computing. While challenges remain, our research points the way toward a brighter future for robots that can go further and longer than ever before. By drawing on the power of biology, we can help ensure that our robotic companions remain both efficient and effective in a variety of challenging environments.
This is just the beginning; the application of brain-inspired computing could lead to transformative advances in the field of robotics, redefining what’s possible in navigation and beyond.