Imagine the panic. Fire alarms blare. Smoke fills the room, and you’re left only with the sense of touch, feeling desperately along walls as you try to find the doorway.
Now imagine technology guiding you by touch. Your smartwatch, alerted by the same alarms, begins “speaking” through your skin, giving directions with coded vibrations, squeezes and tugs whose meanings are as clear as spoken words.
That scenario could play out in the future thanks to technology under development in the laboratory of Rice mechanical engineer Marcia O’Malley, who has spent more than 15 years studying how people can use their sense of touch to interact with technology—be it robots, prosthetic limbs or stroke-rehabilitation software.
“Skin covers our whole body and has many kinds of receptors in it, and we see that as an underutilized channel of information,” said O’Malley, director of the Rice Robotics Initiative and Rice’s Mechatronics and Haptic Interfaces Laboratory (MAHI).
An emergency like the fire scenario described above is just one example. O’Malley said there are many “other situations where you might not want to look at a screen, or you already have a lot of things displayed visually. For example, a surgeon or a pilot might find it very useful to have another channel of communication.”
With new funding from the National Science Foundation, O’Malley and Stanford University collaborator Allison Okamura will soon begin designing and testing soft, wearable devices that allow direct touch-based communications from nearby robots. The funding, made possible by the National Robotics Initiative, is geared toward developing new forms of communication that bypass visual clutter and noise to deliver information quickly and clearly.
“Some warehouses and factories already have more robots than human workers, and technologies like self-driving cars and physically assistive devices will make human-robot interactions far more common in the near future,” said O’Malley, Rice’s Stanley C. Moore Professor of Mechanical Engineering and professor of both computer science and electrical and computer engineering.
Soft, wearable devices could be part of a uniform, like a sleeve, glove, watchband or belt. O’Malley said that by delivering a range of haptic cues, such as a hard or soft squeeze or a stretch of the skin at a particular place and in a particular direction, it may be possible to build a significant “vocabulary” of sensations that carry specific meanings.
“I can see a car’s turn signal, but only if I’m looking at it,” O’Malley said. “We want technology that allows people to feel the robots around them and to clearly understand what those robots are about to do and where they are about to be. Ideally, if we do this correctly, the cues will be easy to learn and intuitive.”
For example, in a study presented this month at the International Symposium on Wearable Computers (ISWC) in Singapore, MAHI graduate student Nathan Dunkelberger showed that users needed less than two hours of training to learn to “feel” most words that were transmitted by a haptic armband. The MAHI-developed “multi-sensory interface of stretch, squeeze and integrated vibrotactile elements,” or MISSIVE, consists of two bands that fit around the upper arm. One of these can gently squeeze, like a blood-pressure cuff, and can also slightly stretch or tug the skin in one direction. The second band has vibrotactile motors—the same vibrating alarms used in most cellphones—at the front, back, left and right sides of the arm.
Using these cues in combination, MAHI created a vocabulary of 23 of the most common vocal sounds for English speakers. These sounds, which are called phonemes, are used in combination to make words. For example, the words “ouch” and “chow” contain the same two phonemes, “ow” and “ch,” in a different order. O’Malley said communicating with phonemes is faster than spelling words letter by letter, and subjects don’t need to know how a word is spelled, only how it’s pronounced.
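To make the idea concrete, here is a minimal sketch in Python of how a phoneme vocabulary might map onto combinations of squeeze, stretch and vibration cues and how a word would be rendered as an ordered cue sequence. The specific cue assignments, the CUE_TABLE dictionary and the word_to_cues function are illustrative assumptions; the article does not describe MISSIVE’s actual encoding.

```python
# Hypothetical phoneme-to-haptic-cue lookup (illustrative only; the real
# MISSIVE cue assignments are not specified in the article).
# Each cue combines a squeeze level, an optional skin-stretch direction,
# and one of the four vibrotactile motor locations on the upper arm.
CUE_TABLE = {
    "ow": {"squeeze": "soft", "stretch": "clockwise", "vibrate": "front"},
    "ch": {"squeeze": "hard", "stretch": None,        "vibrate": "left"},
    # ... the remaining phonemes in the 23-item vocabulary would go here
}

def word_to_cues(phonemes):
    """Translate an ordered phoneme sequence into the haptic cues to play."""
    return [CUE_TABLE[p] for p in phonemes]

# "ouch" and "chow" use the same two phonemes in different orders,
# so they produce the same cues in reversed sequence.
print(word_to_cues(["ow", "ch"]))  # ouch
print(word_to_cues(["ch", "ow"]))  # chow
```

In a real system the cue dictionaries would drive the armband’s actuators rather than be printed, but the ordering principle is the same: the word is the sequence, not the set, of cues.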
Dunkelberger said English speakers use 39 phonemes, but for the proof-of-concept study, he and colleagues at MAHI used 23 of the most common. In tests, subjects were given limited training—just 1 hour, 40 minutes—which involved hearing a spoken phoneme while also feeling it displayed by MISSIVE. In later tests, subjects were asked to identify 150 spoken words consisting of two to six phonemes each, and they identified 86 percent of the words correctly.
“What this shows is that it’s possible, with a limited amount of training, to teach people a small vocabulary of words that they can recall with high accuracy,” O’Malley said. “And there are definitely things we could optimize. We could make the cues more salient. We could refine the training protocol. This was our prototype approach, and it worked pretty well.”
In the NSF project, she said, the team will focus not on conveying words but on conveying non-verbal information.
“There are many potential applications for wearable haptic feedback systems to allow for communication between individuals, between individuals and robots or between individuals and virtual agents like Google Maps,” O’Malley said. “Imagine a smartwatch that can convey a whole language of cues to you directly, and privately, so that you don’t have to look at your screen at all!”
The ISWC study was supported by Facebook. Additional study co-authors include Jenny Sullivan, Joshua Bradley, Nickolas Walling, Indu Manickam, Gautam Dasarathy and Richard Baraniuk, all of Rice; and Ali Israr, Frances Lau, Keith Klumb, Brian Knott and Freddy Abnousi, all of Facebook.
Source: Can You Feel What I’m Saying?