%0 Journal Article %J Device %D 2023 %T Fluidically programmed wearable haptic textiles %A Barclay Jumet %A Zane A. Zook %A Anas Yousaf %A Anoop Rajappan %A Doris Xu %A Te Faye Yap %A Nathaniel Fino %A Zhen Liu %A Marcia K. O’Malley %A Daniel J. Preston %K analog control %K fluidic control %K haptic sleeve %K human-machine interaction %K human-robot interaction %K Navigation %K point force %K smart textiles %K spatiotemporal haptics %K tactile cues %X

Haptic feedback offers a useful mode of communication in visually or auditorily noisy environments. The adoption of haptic devices in our everyday lives, however, remains limited, motivating research on haptic wearables constructed from materials that enable comfortable and lightweight form factors. Textiles, a material class fitting these needs and already ubiquitous in clothing, have begun to be used in haptics, but reliance on arrays of electromechanical controllers detracts from the benefits that textiles offer. Here, we mitigate the requirement for bulky hardware by developing a class of wearable haptic textiles capable of delivering high-resolution information on the basis of embedded fluidic programming. The designs of these haptic textiles enable tailorable amplitudinal, spatial, and temporal control. Combining these capabilities, we demonstrate wearables that deliver spatiotemporal cues in four directions with an average user accuracy of 87%. Subsequent demonstrations of washability, repairability, and utility for navigational tasks exemplify the capabilities of our approach.

%B Device %P 100059 %G eng %U https://www.sciencedirect.com/science/article/pii/S2666998623000832 %R 10.1016/j.device.2023.100059 %> https://mahilab.rice.edu/sites/default/files/publications/DeviceJumet2023.pdf %0 Journal Article %J The International Journal of Robotics Research %D 2022 %T Physical interaction as communication: Learning robot objectives online from human corrections %A Dylan P. Losey %A Andrea Bajcsy %A Marcia K. O’Malley %A Anca D. Dragan %X

When a robot performs a task next to a human, physical interaction is inevitable: the human might push, pull, twist, or guide the robot. The state of the art treats these interactions as disturbances that the robot should reject or avoid. At best, these robots respond safely while the human interacts; but after the human lets go, these robots simply return to their original behavior. We recognize that physical human–robot interaction (pHRI) is often intentional: the human intervenes on purpose because the robot is not doing the task correctly. In this article, we argue that when pHRI is intentional it is also informative: the robot can leverage interactions to learn how it should complete the rest of its current task even after the person lets go. We formalize pHRI as a dynamical system, where the human has in mind an objective function they want the robot to optimize, but the robot does not get direct access to the parameters of this objective: they are internal to the human. Within our proposed framework, human interactions become observations about the true objective. We introduce approximations to learn from and respond to pHRI in real time. We recognize that not all human corrections are perfect: often users interact with the robot noisily, and so we improve the efficiency of robot learning from pHRI by reducing unintended learning. Finally, we conduct simulations and user studies on a robotic manipulator to compare our proposed approach with the state of the art. Our results indicate that learning from pHRI leads to better task performance and improved human satisfaction.

%B The International Journal of Robotics Research %V 41 %P 02783649211050958 %8 Jan 2022 %G eng %U https://doi.org/10.1177/02783649211050958 %& 20-44 %R 10.1177/02783649211050958 %> https://mahilab.rice.edu/sites/default/files/publications/Losey_IJRR2021.pdf %0 Conference Proceedings %B Human Factors and Ergonomics Society Annual Meeting %D 2017 %T Toward training surgeons with motion-based feedback: Initial validation of smoothness as a measure of motor learning %A Shivam Pandey %A Michael D. Byrne %A William H. Jantscher %A Marcia K. O’Malley %A Priyanshu Agarwal %X

Surgery is a challenging domain for motor skill acquisition. A critical contributing factor in this difficulty is that feedback is often delayed from performance and qualitative in nature. Collection of high-density motion information may offer a solution. Metrics derived from this motion capture, in particular indices of movement smoothness, have been shown to correlate with task outcomes in multiple domains, including endovascular surgery. The open question is whether providing feedback based on these metrics can be used to accelerate learning. In pursuit of that goal, we examined the relationship between a motion metric that is simple to compute—spectral arc length—and performance on a simple but challenging motor task, mirror tracing. We were able to replicate previous results showing that movement smoothness measures are linked to overall performance, and now have performance thresholds to use in subsequent work on using these metrics for training.

%B Human Factors and Ergonomics Society Annual Meeting %V 61 %P 1531-1535 %G eng %U https://doi.org/10.1177/1541931213601747 %R 10.1177/1541931213601747 %> https://mahilab.rice.edu/sites/default/files/publications/pandey2017hfes.pdf %0 Journal Article %J NeuroRehabilitation %D 2016 %T Transcranial direct current stimulation (tDCS) of the primary motor cortex and robot-assisted arm training in chronic incomplete cervical spinal cord injury: A proof of concept sham-randomized clinical study %A Nuray Yozbatiran %A Zafer Keser %A Matthew Davis %A Argyrios Stampas %A Marcia K. O’Malley %A Catherine Cooper-Hay %A Joel Frontera %A Felipe Fregni %A Gerard E. Francisco %B NeuroRehabilitation %V 39 %P 401–411 %G eng %> https://mahilab.rice.edu/sites/default/files/publications/TDCS_2016_Neurorehab.pdf %0 Conference Proceedings %B IEEE EMBS Conference on Neural Engineering %D 2013 %T A Pre-Clinical Framework for Neural Control of a Therapeutic Upper-Limb Exoskeleton %A Amy Blank %A Marcia K. O’Malley %A Gerard E. Francisco %A Jose L. Contreras-Vidal %B IEEE EMBS Conference on Neural Engineering %P 1159-1162 %8 2013 %G eng %> https://mahilab.rice.edu/sites/default/files/publications/BMI-EXO_2013_NER.pdf