TY - JOUR
T1 - The Task-Dependent Efficacy of Shared-Control Haptic Guidance Paradigms
JF - IEEE Transactions on Haptics
Y1 - 2012
A1 - Powell, Dane
A1 - O'Malley, Marcia K.
AB -

Shared-control haptic guidance is a common form of robot-mediated training used to teach novice subjects to perform dynamic tasks. Shared-control guidance is distinct from more traditional guidance controllers, such as virtual fixtures, in that it provides novices with real-time visual and haptic feedback from a real or virtual expert. Previous studies have shown varying levels of training efficacy using shared-control guidance paradigms; it is hypothesized that these mixed results are due to interactions between specific guidance implementations (“paradigms”) and tasks. This work proposes a novel guidance paradigm taxonomy intended to help classify and compare the multitude of implementations in the literature, as well as a revised proxy rendering model to allow for the implementation of more complex guidance paradigms. The efficacies of four common paradigms are compared in a controlled study with 50 healthy subjects and two dynamic tasks. The results show that guidance paradigms must be matched to a task's dynamic characteristics to elicit effective training and low workload. Based on these results, we provide suggestions for the future development of improved haptic guidance paradigms.

VL - 5
ER -

TY - Generic
T1 - Efficacy of Shared-Control Guidance Paradigms for Robot-Mediated Training
T2 - IEEE World Haptics Conference
Y1 - 2011
A1 - Powell, Dane
A1 - O'Malley, Marcia K.
JF - IEEE World Haptics Conference
CY - Istanbul, Turkey
ER -

TY - Generic
T1 - Co-presentation of Force Cues for Skill Transfer via Shared-control Systems
T2 - 16th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS)
Y1 - 2010
A1 - Powell, Dane
A1 - O'Malley, Marcia K.
AB -

During training and rehabilitation with haptic devices, it is often necessary to simultaneously present force cues arising from different haptic models (such as guidance cues and environmental forces). Multiple force cues are typically summed to produce a single output force, which conveys only relative information about the original force cues and may not be useful to trainees. Two force co-presentation paradigms are proposed as potential solutions to this problem: temporal separation of force cues, in which one type of force is overlaid with the other in staggered pulses, and spatial separation, in which the forces are presented via multiple haptic devices. A generalized model for separating task and guidance forces in a virtual environment is also proposed. In a pilot study in which sixteen participants were trained in a dynamic target-hitting task using these co-presentation paradigms, simple summation was in fact most effective at eliciting skill transfer in most respects. Spatial separation imposed the lowest overall workload on participants, however, and might thus be more appropriate than summation in tasks with other significant physical or mental demands. Temporal separation was relatively inferior at eliciting skill transfer, but it is hypothesized that this paradigm would have performed considerably better in a non-rhythmic task, indicating the need for further research.

JF - 16th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS)
ER -

TY - THES
T1 - Implementation and Analysis of Shared-Control Guidance Paradigms for Improved Robot-Mediated Training
Y1 - 2010
A1 - Powell, Dane
PB - Rice University
CY - Houston
VL - Master of Science
ER -

TY - Generic
T1 - Impact of visual error augmentation methods on task performance and motor adaptation
T2 - IEEE 11th International Conference on Rehabilitation Robotics (ICORR 2009)
Y1 - 2009
A1 - Celik, Ozkan
A1 - Powell, Dane
A1 - O'Malley, Marcia K.
AB -

We hypothesized that augmenting the visual error feedback provided to subjects training in a point-to-point reaching task under visual distortion would improve the amount and speed of adaptation. Previous studies showing that human learning is error-driven and that visual error augmentation can improve the rate at which subjects decrease their trajectory error in such a task provided the motivation for our study. In a controlled experiment, subjects were required to perform point-to-point reaching movements in the presence of a rotational visual distortion. The amount and speed of their adaptation to this distortion were calculated based on two performance measures: trajectory error and hit time. We tested how three methods of error augmentation (error amplification, traditional error offsetting, and progressive error offsetting) affected the amount and speed of adaptation, and we additionally propose definitions for “amount” and “speed” of adaptation in an absolute sense that are more practical than the definitions used in previous studies. We conclude that traditional error offsetting promotes the fastest learning, while error amplification promotes the most complete learning. Progressive error offsetting, a novel method, resulted in slower training than in the control group, but we hypothesize that it could be improved with further tuning, indicating a need for further study of this method. These results have implications for improving motor skill learning across many fields, including rehabilitation after stroke, surgical training, and teleoperation.

JF - IEEE 11th International Conference on Rehabilitation Robotics (ICORR 2009)
ER -

TY - Generic
T1 - Implementing Haptic Feedback Environments from High-level Descriptions
Y1 - 2009
A1 - Zhu, Angela Yun
A1 - Inoue, Jun
A1 - Peralta, Marisa
A1 - Taha, Walid
A1 - O'Malley, Marcia K.
A1 - Powell, Dane
ER -