Control Shift: New Reality Labs Research on sEMG Published in ‘Nature’
Punchcards. Keyboards. The mouse. Touchscreens. For multiple generations, we’ve been adapting to new ways of interacting with computers in order to communicate, create, and get things done. But what if there were a way for our devices to adapt to us, driven by machine learning and AI, with a control scheme that was less robotic, more intuitive, and inherently more human?
We’ve seen advances in this area thanks to computer vision and natural language understanding, which let computers see the world as we do and let us interact with them using our voice. But what if we could control our computers using the subtle movements of our hands? After all, our hands are one of the first tools we use to interact with the world around us. What if a new way to engage with machines were literally at our fingertips?
It’s a future we’ve been exploring at Reality Labs for years. Based on our findings, we believe that surface electromyography (sEMG) at the wrist is the key to unlocking the next paradigm shift in human-computer interaction (HCI).
And it’s an idea that’s catching on. Check out the latest issue of Nature, one of the world’s leading multidisciplinary science journals, for our latest peer-reviewed article that outlines our work in the field and validates the use of sEMG as an intuitive and seamless input that works across most people. First published in 1869, Nature has been home to the likes of Charles Darwin, Jennifer A. Doudna, Sir James Chadwick, Rosalind E. Franklin, and our own Yann LeCun.

sEMG enables intuitive, seamless interaction with our devices on the go. And we’ve supported the work of external research labs showing that the technology is inherently inclusive: it works for people with diverse physical abilities and characteristics.
We successfully prototyped an sEMG wristband with Orion, our first pair of true augmented reality (AR) glasses, but that was just the beginning. Our teams have developed advanced machine learning models that transform the neural signals controlling muscles at the wrist into commands that drive people’s interactions with the glasses, eliminating the need for traditional, more cumbersome forms of input. You can type and send messages without a keyboard, navigate a menu without a mouse, and engage with digital content while still seeing the world around you, without having to look down at your phone.

sEMG recognizes your intent to perform a variety of gestures, like tapping, swiping, and pinching—all with your hand resting comfortably at your side. And thanks to our handwriting recognition technology, you can quickly jot down messages on a hard surface like a desk, table, or even your leg, which opens up new possibilities for discreet communication on the go.
Our neural networks are trained on data from thousands of consenting research participants, which makes them highly accurate at decoding subtle gestures across a wide range of people without the need for individual calibration. And although our generalized models work well out of the box, even a small amount of personalization based on limited individual data can improve handwriting recognition accuracy by up to 16%. In other words, an sEMG wristband can adapt to you and deliver better performance over time.
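The underlying idea of personalization can be illustrated with a toy sketch: start from a “generalized” classifier (standing in for a model pretrained on many users) and take a few gradient steps on a small calibration set from one user. Everything below is illustrative only: the linear model, the feature and class counts, and the synthetic data are assumptions for the sketch, not Reality Labs’ actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 16, 4

# Stand-in "generalized" decoder: a linear classifier whose weights
# would, in the real system, come from training on thousands of users.
W = rng.normal(scale=0.1, size=(n_features, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(W, X, y):
    """Mean cross-entropy of the classifier on labeled frames (X, y)."""
    return -np.log(softmax(X @ W)[np.arange(len(y)), y]).mean()

def personalize(W, X, y, lr=0.1, steps=300):
    """A few gradient steps on a small personal calibration set."""
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        grad = X.T @ (softmax(X @ W) - Y) / len(X)
        W = W - lr * grad
    return W

# Tiny synthetic "calibration session": 40 labeled gesture frames.
X = rng.normal(size=(40, n_features))
y = rng.integers(0, n_classes, size=40)

loss_before = nll(W, X, y)
W_personal = personalize(W, X, y)
loss_after = nll(W_personal, X, y)
```

The point of the sketch is only the shape of the workflow: the generic weights are the starting point, and a brief adaptation pass on the user’s own signals lowers the decoding loss for that user.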
We believe this is the best technology anyone has developed for controlling your devices in a seamless, intuitive, and adaptable way that works for most people:
- It’s completely non-invasive, opening up new ways to use muscle signals to interact with computers while solving many of the problems facing other forms of HCI.
- It’s convenient, simple, and natural to use—and it works in situations where alternatives like voice interactions may be impractical or undesirable, like sending a private message out in public.
- It’s always available, and removes the need for bulky accessories that pull you out of the moment and distract you from the people and things that matter most.
Perhaps most exciting, our paper in Nature gives the broader scientific community a blueprint to create neuromotor interfaces of their own. In addition to a set of important design rules and best practices across hardware, experimental design, data requirements, and modeling, we’re publicly releasing a dataset containing over 100 hours of sEMG recordings from over 300 research participants across three distinct tasks. Along with our previously open-sourced sEMG datasets for pose estimation and surface typing, our hope is that today’s release will help accelerate future work by academics and researchers in the field.
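For researchers picking up a dataset like this, a common first step with raw sEMG is to slice the multichannel signal into short sliding windows and compute per-channel RMS features. The sketch below assumes recordings arrive as a `(samples, channels)` array; the sampling rate, window sizes, and channel count are placeholder values for illustration, not the released dataset’s actual parameters.

```python
import numpy as np

def rms_features(emg, fs=2000, win_ms=100, hop_ms=50):
    """Slide a window over a (samples, channels) sEMG array and
    compute per-channel RMS, a standard first featurization step."""
    win = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    frames = []
    for start in range(0, emg.shape[0] - win + 1, hop):
        seg = emg[start:start + win]
        frames.append(np.sqrt((seg ** 2).mean(axis=0)))
    return np.array(frames)

# One second of fake 16-channel recording in place of real data.
fake = np.random.default_rng(1).normal(size=(2000, 16))
feats = rms_features(fake)
print(feats.shape)  # (19, 16)
```

Each output row summarizes 100 ms of muscle activity per electrode, a form that downstream gesture or handwriting models can consume directly.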
Over time, sEMG could revolutionize how we interact with our devices, help people with motor disabilities gain new levels of independence while improving their quality of life, and unlock new possibilities for HCI that we haven’t even dreamt of yet.
It may just prove to be the perfect input for virtually any device.


