
Wearable interface called EchoSpeech

Cornell University researchers have developed a silent-speech recognition interface that uses acoustic sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands, based on lip and mouth movements.

The low-power, wearable interface – called EchoSpeech – requires just a few minutes of user training data before it will recognize commands and can be run on a smartphone.

Ruidong Zhang, a doctoral student in information science, is the lead author of “EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing,” which will be presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) this month in Hamburg, Germany.

“For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer. It could give patients their voices back,” Zhang said of the technology’s potential use with further development.

In its present form, EchoSpeech could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or quiet library. The silent-speech interface can also be paired with a stylus and used with computer-aided design (CAD) software, all but eliminating the need for a keyboard and a mouse.


Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm then analyzes these echo profiles in real time, with about 95% accuracy.
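The sensing idea described above can be sketched in code. The following is a minimal, illustrative NumPy sketch, not the authors' implementation: the chirp parameters, delays, and command names are assumptions, and a toy nearest-template classifier stands in for the deep learning model that EchoSpeech actually uses on the echo profiles.

```python
import numpy as np

# Assumed parameters for illustration only.
FS = 50_000        # sample rate in Hz (assumed)
CHIRP_LEN = 600    # samples per transmitted chirp (assumed)

def make_chirp(f0=16_000.0, f1=21_000.0, n=CHIRP_LEN, fs=FS):
    """Linear frequency-modulated chirp, a common active-sonar probe signal."""
    t = np.arange(n) / fs
    k = (f1 - f0) / (n / fs)                 # sweep rate, Hz per second
    return np.sin(2 * np.pi * (f0 + 0.5 * k * t) * t)

def echo_profile(received, chirp):
    """Cross-correlate received audio with the transmitted chirp.
    Correlation peaks mark reflections at different path lengths, so the
    profile changes as the skin and mouth move."""
    return np.abs(np.correlate(received, chirp, mode="valid"))

def classify(profile, templates):
    """Toy nearest-template classifier standing in for the deep model."""
    return min(templates, key=lambda cmd: np.linalg.norm(profile - templates[cmd]))

def simulate(delay, n_extra=400):
    """Simulate an echo arriving after `delay` samples (a mouth shape proxy)."""
    sig = np.zeros(CHIRP_LEN + n_extra)
    sig[delay:delay + CHIRP_LEN] = make_chirp()
    return sig

chirp = make_chirp()
templates = {"open": echo_profile(simulate(50), chirp),
             "close": echo_profile(simulate(200), chirp)}
print(classify(echo_profile(simulate(55), chirp), templates))  # prints "open"
```

In the real system, a continuous stream of chirps yields a time series of echo profiles, and the deep model classifies movement patterns across that series rather than single static profiles as in this sketch.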

“We’re moving sonar onto the body,” said Cheng Zhang, assistant professor of information science and director of Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab.

“We’re very excited about this system,” he said, “because it really pushes the field forward on performance and privacy. It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”
