The clinical standard of care for upper-limb myoelectric prostheses uses surface electrodes placed over muscles on the user’s residual limb to record the electrical potentials (known as electromyograms, or EMG) generated during muscle contraction. The EMG signals from these electrodes, when mapped appropriately to a motorized joint on a prosthesis, can serve as inputs to a control interface that opens and closes a hand with as little as one electrode.1,2 However, the simplicity and robustness of these interfaces come at the expense of the user’s ability to rapidly switch between different grasp patterns or to perform more complex and individuated movements. Pattern recognition systems that use machine learning models to map signal features from multiple EMG electrodes to a set of grasp patterns have allowed users to switch between a larger number of functional grasps faster and more intuitively.3,4 These systems, however, have seen limited clinical translation because they require frequent retraining to account for changes in EMG signal properties during prolonged use and because they cannot predict motions that were not included in model training.5,6,7
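To illustrate the pattern-recognition approach described above, the sketch below shows a minimal windowed feature-extraction and classification pipeline. The sampling rate, window sizes, feature set, classifier, and synthetic data are illustrative assumptions and do not represent the specific configuration of any system cited here.

```python
# Minimal sketch of an EMG pattern-recognition pipeline (hypothetical data and
# parameters). Windowed time-domain features from multiple electrodes are
# mapped to grasp classes with a linear discriminant classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 1000    # sampling rate (Hz), assumed
WIN = 200    # 200 ms analysis window
STEP = 50    # 50 ms window increment

def td_features(window):
    """Classic time-domain features per channel: mean absolute value,
    waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.signbit(window).astype(int), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

def featurize(emg):
    """Slide an analysis window over multichannel EMG (samples x channels)."""
    return np.array([td_features(emg[i:i + WIN])
                     for i in range(0, len(emg) - WIN + 1, STEP)])

# Synthetic stand-in for labeled training data: emg_train is samples x channels,
# labels_train holds one grasp label per analysis window.
rng = np.random.default_rng(0)
emg_train = rng.standard_normal((10 * FS, 8))
X = featurize(emg_train)
labels_train = rng.integers(0, 4, size=len(X))   # 4 hypothetical grasp classes

clf = LinearDiscriminantAnalysis().fit(X, labels_train)
predicted_grasps = clf.predict(featurize(emg_train[:FS]))  # classify new windows
```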
The available myoelectric control interfaces provide prosthesis users with the ability to select among several functional hand postures required for activities of daily living (ADLs), such as cylindrical grips for holding bottles and pinch grips for holding keys, among others. These postures are selected by the user one after the other and are ultimately executed in a binary open-close fashion, which relegates these systems to the functional realm of grasping. Dexterous motions, characterized by simultaneous and independent control of multiple joints and the ability to correct movements in real time, remain beyond the current capabilities of clinical myoelectric interfaces.
Our approach is to borrow computational modeling techniques from fields ranging from human motor control to motor unit physiology and musculoskeletal biomechanics,8,9,10 to develop a myoelectric interface capable of simultaneous and individuated control of the fingers of a prosthetic hand.
Musculoskeletal simulation of a user’s amputated limb is a recent alternative to conventional control interfaces and may serve as a powerful basis for such an interface. However, given their mathematical complexity, these simulations are generally run offline, although recent work has introduced musculoskeletal models into real-time control paradigms.12,13,14,15 Toward this end, our initial work is focused on developing novel EMG signal processing techniques that can generate a proportional control signal with minimal latency, in contrast to traditional EMG enveloping, which introduces substantial time delays when computing moving-average or RMS values. Our current work evaluates a thresholded digital representation of the surface EMG signal, known as Myopulse Modulation,16,17 and its potential as an input signal for a control interface based on a musculoskeletal model.
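To make the latency contrast concrete, the sketch below compares a conventional causal RMS envelope against a simple thresholded, myopulse-style representation that produces a per-sample binary output. The sampling rate, window length, threshold, and synthetic signal are illustrative assumptions and do not reflect the parameters used in our interface.

```python
# Hedged sketch contrasting a windowed RMS envelope with a thresholded
# "myopulse"-style representation; all values below are illustrative.
import numpy as np

FS = 1000          # sampling rate (Hz), assumed
RMS_WIN = 200      # 200 ms RMS window -> roughly 100 ms of group delay
THRESHOLD = 0.1    # comparator threshold (arbitrary units)

def rms_envelope(emg):
    """Causal moving-window RMS: smooth, but each estimate reflects the
    preceding RMS_WIN samples, introducing roughly WIN/2 of delay."""
    kernel = np.ones(RMS_WIN) / RMS_WIN
    sq = np.convolve(emg ** 2, kernel, mode="full")[:len(emg)]
    return np.sqrt(sq)

def myopulse(emg, threshold=THRESHOLD):
    """Per-sample comparator output: 1 whenever |EMG| exceeds the threshold.
    The pulse train is available with essentially no buffering, and its
    short-term duty cycle scales with contraction intensity."""
    return (np.abs(emg) > threshold).astype(np.uint8)

# Synthetic EMG-like burst: amplitude-modulated noise standing in for a contraction.
t = np.arange(0, 2, 1 / FS)
activation = np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)   # slow on/off profile
emg = activation * np.random.default_rng(1).standard_normal(len(t)) * 0.3

envelope = rms_envelope(emg)   # delayed, smoothed amplitude estimate
pulses = myopulse(emg)         # minimal-latency binary control signal
duty_cycle = pulses.mean()     # fraction of samples above threshold
```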