Co-Learning of Task and Sensor Placement for Soft Robotics (Teaser)
Alexander Amini

Published on Mar 21, 2021

Unlike rigid robots, which operate with a compact set of degrees of freedom, soft robots must reason about an infinite-dimensional state space. Mapping this continuum state space presents significant challenges, especially when working with a finite set of discrete sensors: reconstructing the robot's state from these sparse inputs is difficult, and sensor location has a profound downstream impact on the richness of learned models for robotic tasks. In this work, we present a novel representation for co-learning sensor placement and complex tasks. Specifically, we present a neural architecture that processes on-board sensor information to learn a salient and sparse selection of placements for optimal task performance. We evaluate our model and learning algorithm on six soft robot morphologies for various supervised learning tasks, including tactile sensing and proprioception. We also highlight applications to soft robot motion subspace visualization and control. Our method demonstrates superior performance in task learning compared to algorithmic and human baselines, while also learning sensor placements and latent spaces that are semantically meaningful.
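For intuition only, here is a minimal PyTorch-style sketch of one way a network could score candidate sensor placements and keep a sparse top-k subset whose readings feed a downstream task head. This is not the authors' architecture; all names and parameters (SparseSensorSelector, num_sensors, task_dim, the candidate features) are hypothetical assumptions for illustration.

```python
# Hypothetical sketch, not the paper's implementation: score every candidate
# placement on the soft body, keep the k most salient, and predict a task
# target (e.g., proprioception) from only those readings.
import torch
import torch.nn as nn

class SparseSensorSelector(nn.Module):
    def __init__(self, feature_dim: int, num_sensors: int, hidden: int = 64, task_dim: int = 3):
        super().__init__()
        # Scores each candidate placement from its local features (assumed: position, strain, etc.).
        self.scorer = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.num_sensors = num_sensors
        # Task head consumes only the selected sensor readings.
        self.task_head = nn.Sequential(
            nn.Linear(num_sensors, hidden), nn.ReLU(), nn.Linear(hidden, task_dim)
        )

    def forward(self, features: torch.Tensor, readings: torch.Tensor):
        # features: (batch, num_candidates, feature_dim)
        # readings: (batch, num_candidates) sensor values at every candidate location
        scores = self.scorer(features).squeeze(-1)           # (batch, num_candidates)
        weights = torch.softmax(scores, dim=-1)               # soft saliency over placements
        topk = torch.topk(weights, self.num_sensors, dim=-1)  # sparse selection of k placements
        selected = torch.gather(readings * weights, -1, topk.indices)
        return self.task_head(selected), topk.indices

# Usage: pick 8 of 200 candidate placements for a 3-dimensional target.
model = SparseSensorSelector(feature_dim=6, num_sensors=8)
feats, reads = torch.randn(4, 200, 6), torch.randn(4, 200)
pred, placements = model(feats, reads)
```

The selection stays differentiable through the saliency weights, so the placement scores and the task head can be trained jointly; the actual paper's formulation may differ.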

Authors: Andrew Spielberg*, Alexander Amini*, Lillian Chin, Wojciech Matusik, and Daniela Rus
Published in: IEEE Robotics and Automation Letters (RA-L), presented at RoboSoft 2021.

OpenAccess (free) paper link: https://ieeexplore.ieee.org/stamp/sta...
Full paper video: Co-Learning of Task and Sensor Placem...
