Visual Features as Frames of Reference in Task-Parametrised Learning from Demonstration


Title: Visual Features as Frames of Reference in Task-Parametrised Learning from Demonstration
Authors: Shirine El Zaatari (Institute of Advanced Manufacturing and Engineering, Coventry University); Weidong Li (Engineering, Environment and Computing Faculty, Coventry University)
Year: 2019
Citation: El Zaatari, S., & Li, W. (2019). Visual Features as Frames of Reference in Task-Parametrised Learning from Demonstration. UK-RAS19 Conference: “Embedded Intelligence: Enabling & Supporting RAS Technologies” Proceedings, 94-97. doi: 10.31256/UKRAS19.25

Abstract:

Task-parametrised learning from demonstration (TP-LfD) is well suited to programming collaborative robots (cobots) for collaborative industrial tasks, since the algorithm can encode complex mappings from observed states to the cobot’s actions. TP-LfD relies heavily on perception, since detected objects and people serve as task parameters. This poses a challenge because 1) industrial objects are difficult to detect due to their irregular shapes and sizes, and 2) marker stickers for detection are undesirable in manufacturing scenarios. A further challenge is that, although TP-LfD is an intuitive programming method, operators find it difficult to initialise because they lack the underlying theoretical knowledge of the researchers who previously tested the algorithm. We aim to address both challenges simultaneously by building an automatic task parametriser in which reinforcement learning assigns task parameters from a set of randomly detected visual features. In this paper, we introduce our solution and the progress made so far.
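To make the idea concrete, the sketch below (our illustration, not the paper’s implementation) treats each detected visual feature as a candidate frame of reference and uses an epsilon-greedy bandit, one elementary form of reinforcement learning, to learn which feature serves best as a task parameter. The reward signal (how consistent the demonstrations look once expressed in a candidate frame) and all poses and names are illustrative assumptions.

    # Illustrative sketch only: assigning task parameters from detected
    # visual features with a simple bandit-style reinforcement learner.
    # Feature poses, the reward, and all names are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def express_in_frame(p, b, A):
        """TP-LfD-style change of frame: p_local = A^{-1}(p - b),
        where (A, b) are a candidate frame's rotation and origin."""
        return np.linalg.inv(A) @ (p - b)

    class FrameBandit:
        """Epsilon-greedy bandit over candidate visual features."""
        def __init__(self, n_candidates, epsilon=0.2):
            self.q = np.zeros(n_candidates)  # running value per feature
            self.n = np.zeros(n_candidates)
            self.epsilon = epsilon

        def select(self):
            if rng.random() < self.epsilon:
                return int(rng.integers(len(self.q)))  # explore
            return int(np.argmax(self.q))              # exploit

        def update(self, i, reward):
            self.n[i] += 1
            self.q[i] += (reward - self.q[i]) / self.n[i]

    # Toy setup: feature 0 is the task-relevant object, so demonstration
    # endpoints cluster around it and look consistent in its frame.
    frames = [(rng.normal(size=2), np.eye(2)) for _ in range(4)]
    demos = [frames[0][0] + rng.normal(scale=0.01, size=2)
             for _ in range(20)]

    bandit = FrameBandit(n_candidates=len(frames))
    for _ in range(200):
        i = bandit.select()
        b, A = frames[i]
        local = np.array([express_in_frame(p, b, A) for p in demos])
        # Reward = negative spread: a good frame explains the demos.
        bandit.update(i, -local.std())

    print("value estimate per candidate feature:", np.round(bandit.q, 3))
    print("selected task parameter:", int(np.argmax(bandit.q)))

In this toy run the bandit converges on feature 0, since the demonstrations have near-zero variance in its frame; a real task parametriser would of course use richer rewards and perception, but the selection loop has the same shape.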
