Unsupervised Online Learning for Robotic Interestingness with Visual Memory

Victoria D. Doty

Autonomous robots must recognize interesting scenes to make further decisions. A recent paper on arXiv.org proposes an online learning scheme for finding interesting sites during robotic exploration.


A three-stage learning architecture is introduced. First, a model is trained offline on large-scale data in an unsupervised manner to acquire human-like experience. Then, short-term learning is used during robot deployment to quickly acquire task-related knowledge.

Finally, online learning enables environmental adaptation and real-time response. To accelerate short-term and online learning, a novel 4-D visual memory is introduced, replacing the expensive back-propagation used in previous approaches. The authors demonstrate that the suggested approach achieves, on average, 20% higher accuracy than state-of-the-art unsupervised methods.
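The paper's exact memory design is not spelled out in this summary, but the idea of learning without back-propagation can be illustrated with a small sketch. Below is a minimal, hypothetical Python/PyTorch version of a 4-D visual memory: N slots, each storing a C x H x W feature map, with similarity-weighted writing and reading. The slot count, feature shape, the moving-average rate `lr`, and the use of global pooling (a crude stand-in for the paper's translation invariance) are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class VisualMemory:
    """Hypothetical, simplified 4-D visual memory: N slots of C x H x W features.
    Writing and reading use similarity-weighted averaging, so the memory is
    updated online without back-propagation."""

    def __init__(self, slots=100, channels=128, height=8, width=8, lr=0.1):
        # The 4-D memory cube (slots x channels x height x width).
        self.memory = torch.zeros(slots, channels, height, width)
        self.lr = lr  # moving-average write rate (illustrative value)

    def _similarity(self, feature):
        # Cosine similarity between the query and each slot, computed on
        # globally pooled descriptors for simplicity.
        q = F.normalize(feature.mean(dim=(-2, -1)), dim=0)      # (C,)
        m = F.normalize(self.memory.mean(dim=(-2, -1)), dim=1)  # (N, C)
        return m @ q                                            # (N,)

    def write(self, feature):
        # Blend the new feature into slots in proportion to their similarity.
        w = torch.softmax(self._similarity(feature), dim=0)[:, None, None, None]
        self.memory = (1 - self.lr * w) * self.memory + self.lr * w * feature

    def read(self, feature):
        # Recall a memory-based reconstruction of the query; low similarity
        # between recall and query indicates a novel ("interesting") scene.
        w = torch.softmax(self._similarity(feature), dim=0)[:, None, None, None]
        recall = (w * self.memory).sum(dim=0)
        novelty = 1 - F.cosine_similarity(recall.flatten(), feature.flatten(), dim=0)
        return recall, novelty.item()
```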

Autonomous robots frequently need to detect “interesting” scenes to decide on further exploration, or to decide which data to share for cooperation. These scenarios often require fast deployment with little or no training data. Prior work considers “interestingness” based on data from the same distribution. Instead, we propose to develop a method that automatically adapts online to the environment to report interesting scenes quickly. To address this problem, we develop a novel translation-invariant visual memory and design a three-stage architecture for long-term, short-term, and online learning, which enables the system to learn human-like experience, environmental knowledge, and online adaptation, respectively. With this system, we achieve an average of 20% higher accuracy than state-of-the-art unsupervised methods in a subterranean tunnel environment. We show performance comparable to supervised methods in robot exploration scenarios, demonstrating the efficacy of our approach. We expect the presented method to play an important role in interestingness recognition for robotic exploration tasks.
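To make the online-adaptation idea concrete, here is a hypothetical run-time loop built on the memory sketch above: a frozen backbone (standing in for the long-term, human-like experience) extracts features, the memory scores each frame's novelty, and sufficiently novel frames are reported as interesting before being written back so the system adapts to its environment. The `backbone`, `threshold`, and frame format are assumptions for illustration, not the authors' pipeline.

```python
import torch

def online_interestingness(frames, backbone, memory, threshold=0.5):
    """Hypothetical online loop: flag novel frames and adapt the memory,
    with no gradient updates at run time."""
    interesting = []
    for t, frame in enumerate(frames):  # frames: iterable of 3 x H x W image tensors
        with torch.no_grad():
            feature = backbone(frame.unsqueeze(0)).squeeze(0)  # (C, h, w) feature map
        _, novelty = memory.read(feature)
        if novelty > threshold:
            interesting.append(t)       # report the scene as interesting
        memory.write(feature)           # online adaptation, no back-propagation
    return interesting
```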

Research paper: Wang, C., Qiu, Y., Wang, W., Hu, Y., Kim, S., and Scherer, S., “Unsupervised Online Learning for Robotic Interestingness with Visual Memory”, 2021. Link: https://arxiv.org/abs/2111.09793

