| Field | Value |
|---|---|
| Title | Learning Robot Objectives from Physical Human Interaction |
| Publication Type | Conference Proceedings |
| Year of Conference | 2017 |
| Authors | Bajcsy, A., Losey, D. P., O'Malley, M. K., Dragan, A. D. |
| Conference Name | Conference on Robot Learning (CoRL) |
| Conference Location | Mountain View, CA |
| Keywords | learning from demonstration; physical human-robot interaction |
When humans and robots work in close proximity, physical interaction is inevitable. Traditionally, robots treat physical interaction as a disturbance and resume their original behavior once the interaction ends. In contrast, we argue that physical human interaction is informative: it carries useful information about how the robot should perform its task. We formalize learning from such interactions as a dynamical system in which the task objective has parameters that are part of the hidden state, and physical human interactions are observations about these parameters. We derive an online approximation of the robot's optimal policy in this system, and test it in a user study. The results suggest that learning from physical interaction leads to better robot task performance with less human effort.
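The idea of treating a physical correction as an observation about hidden objective parameters can be illustrated with a minimal sketch (not the authors' code; the feature choices, update rule, and step size below are hypothetical). The robot's task cost is a weighted sum of trajectory features, and a human push deforms the current trajectory; the weights are then nudged in the direction implied by the deformation:

```python
import math

def features(xi):
    """Hypothetical features of a 2-D trajectory (list of (x, y) points):
    mean height and total path length."""
    height = sum(p[1] for p in xi) / len(xi)
    length = sum(math.dist(a, b) for a, b in zip(xi, xi[1:]))
    return (height, length)

def update_theta(theta, xi_robot, xi_human, alpha=0.5):
    """Online gradient-style update: assuming the robot minimizes
    cost(xi) = theta . features(xi), shift theta opposite to the feature
    change induced by the human's physical correction."""
    f_robot = features(xi_robot)
    f_human = features(xi_human)
    return tuple(t - alpha * (h - r)
                 for t, r, h in zip(theta, f_robot, f_human))

# Example: the human pushes the trajectory downward, signaling that
# lower paths are preferred.
xi_robot = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]   # robot's planned path
xi_human = [(0.0, 1.0), (1.0, 0.5), (2.0, 1.0)]   # path deformed by the push

theta = (0.0, 0.0)
theta = update_theta(theta, xi_robot, xi_human)
# theta[0] becomes positive, penalizing height in future plans.
```

After the update the robot would replan with the new weights rather than simply resuming its original trajectory, which is the qualitative behavior the abstract describes.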