A vision-based algorithm for estimating tip interaction forces on a deflected Atomic Force Microscope (AFM) cantilever is described. Specifically, the algorithm estimates forces acting on an AFM cantilever used as a nanomanipulator inside a Scanning Electron Microscope (SEM). The vision-based force sensor can provide real-time force feedback, a capability absent in many SEMs. The forces acting on the cantilever tip are estimated from the detected slope of the cantilever.
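As a rough illustration of a slope-based force estimate, the sketch below applies Euler-Bernoulli beam theory, in which a point load F at the free end of a cantilever produces a tip slope theta = F L^2 / (2 E I). The cantilever dimensions and material properties used here are placeholder assumptions, not parameters from this work.

```python
# Minimal sketch of a slope-based tip-force estimate using Euler-Bernoulli
# beam theory. All parameter values are placeholders (assumed, not from
# this work).
E = 169e9       # Young's modulus of silicon, Pa (assumed)
L = 450e-6      # cantilever length, m (assumed)
w = 50e-6       # cantilever width, m (assumed)
t = 2e-6        # cantilever thickness, m (assumed)
I = w * t**3 / 12.0   # second moment of area of the rectangular cross-section

def tip_force_from_slope(theta_tip):
    """Estimate the point force at the cantilever tip from the measured
    tip slope, using theta_tip = F * L**2 / (2 * E * I)."""
    return 2.0 * E * I * theta_tip / L**2

# Example: a tip slope of 1 mrad measured from the SEM image
print(tip_force_from_slope(1e-3), "N")
```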
The psychology of human learning as it pertains to manual control tasks in fully dynamic, multi-degree-of-freedom domains remains underdeveloped. While we can currently teach these tasks, we cannot predict how well people will perform in these domains or how rapidly they will learn.
Virtual fixtures, shared controllers, and other haptic guidance schemes have been used to supplement virtual motor tasks in order to improve performance and skill retention and to reduce training duration and user workload. In an error-reducing shared controller implemented in our lab, performance of a manual task was influenced by participants’ ability to identify and then excite a virtual two-mass system at its natural frequency.
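For reference, the sketch below evaluates the undamped natural frequency of a two-mass system, assuming the two masses are coupled by a single spring and otherwise unconstrained; the actual configuration and parameters of the virtual system used in our lab are not specified here, so the values below are placeholders.

```python
import math

# Minimal sketch: natural frequency of a virtual two-mass system, assuming
# two masses coupled by one spring (free-free). Parameter values are
# placeholders, not from this work.
m1 = 1.0    # kg, mass coupled to the user's hand (assumed)
m2 = 0.5    # kg, remote mass (assumed)
k = 200.0   # N/m, coupling spring stiffness (assumed)

# Non-rigid-body mode of the free-free two-mass-spring system:
# omega_n = sqrt(k * (m1 + m2) / (m1 * m2))
omega_n = math.sqrt(k * (m1 + m2) / (m1 * m2))
f_n = omega_n / (2.0 * math.pi)
print(f"natural frequency: {f_n:.2f} Hz")
```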