Scientists have created a computer interface that can pick up on internal verbalizations by reading neuromuscular signals in the jaw and face.
Researchers at MIT's Media Lab announced the creation of AlterEgo, a computer system that can transcribe the words you say in your head, according to an MIT News report. Using hardware that can detect neuromuscular signals in the jaw and face through electrodes, the system can pick up on things that are "undetectable to the human eye."
The system ties specific neuromuscular signals to particular words, the report said, allowing it to decipher the minuscule physical signals your body sends when you internally verbalize something.
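At its core, this is a supervised classification problem: short windows of multi-electrode signal go in, a predicted word comes out. The sketch below illustrates that signal-to-word mapping in Python with synthetic data and a generic scikit-learn classifier; the vocabulary, electrode count, window size, and model are all illustrative assumptions, not MIT's actual pipeline.

```python
# Minimal sketch of the core idea: classify short windows of
# neuromuscular (EMG-style) signals as silently spoken words.
# Synthetic data and a generic classifier stand in for the real
# AlterEgo hardware and model; none of this is the team's code.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
VOCAB = ["up", "down", "left", "right"]       # hypothetical word set
N_ELECTRODES, WINDOW = 7, 250                 # 7 sites, 250 samples/window

def fake_emg(word_id, n):
    """Generate noisy signal windows whose mean shifts per word."""
    base = rng.normal(0, 1, (n, N_ELECTRODES, WINDOW))
    return base + 0.5 * word_id               # class-dependent offset

X = np.concatenate([fake_emg(i, 200) for i in range(len(VOCAB))])
y = np.repeat(np.arange(len(VOCAB)), 200)
X = X.reshape(len(X), -1)                     # flatten windows to feature vectors

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)                           # learn signal-to-word mapping
print(f"held-out accuracy: {clf.score(X_te, y_te):.0%}")
```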
Other professors and researchers say this technology could be applied in a number of ways. Thad Starner, a professor at Georgia Tech's College of Computing, told MIT's News Office that the tech would be valuable in any situation where people need to communicate clearly in loud environments, such as on airport tarmacs, or for soldiers and police in tactical situations.
"You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press," Starner told MIT News. "This is a system that would make sense, especially because oftentimes in these types of or situations people are already wearing protective gear."
Part of the researchers' goal for the project was to build wearable technology that could understand minute signals and to create a system in which artificial intelligence (AI) works to enhance the human mind, according to the report.
"The motivation for this was to build an IA device — an intelligence-augmentation device," Arnav Kapur, an MIT graduate student told the campus publication. Mr. Kapur lead the research and development of the system. "Our idea was: Could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?"
Kapur and his thesis advisor, media arts and sciences professor Pattie Maes, said many people are inextricably attached to their smartphones, for better or for worse. Their research team was interested in finding a way to make the vast amount of information on the internet easily accessible and less cumbersome.
"At the moment, the use of those devices is very disruptive. If I want to look something up that's relevant to a conversation I'm having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I'm with to the phone itself," Maes told MIT News.
Instead, Maes and her students have been working on tech tools that can allow a user to access all the information available online while remaining "in the present," she told MIT News.
They initially tested the software during a chess game, with the user silently verbalizing his opponent's moves and an AI algorithm responding with moves the user should make. The devices are constantly learning, correlating more neuromuscular signals with more words and phrases.
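Conceptually, the chess demo is a simple loop: decode the silently spoken move, feed it to a chess engine, and relay the suggested reply back to the user. Here is a rough sketch of that loop; the `transcribe_silent_speech` placeholder and the trivial move picker are stand-ins for the AlterEgo decoder and whatever engine the team actually used, and the python-chess library is an assumed dependency.

```python
# Rough sketch of the chess-demo loop described above: a silent-speech
# transcription feeds a chess engine, which suggests the user's reply.
# `transcribe_silent_speech` stands in for the AlterEgo pipeline and
# the "engine" here is trivial; none of this is the team's actual code.
import random
import chess  # pip install python-chess

def transcribe_silent_speech() -> str:
    """Placeholder: would return text decoded from neuromuscular signals."""
    return input("opponent's move (e.g. e2e4): ")

def suggest_move(board: chess.Board) -> chess.Move:
    """Toy stand-in for a real engine: pick any legal move."""
    return random.choice(list(board.legal_moves))

board = chess.Board()
while not board.is_game_over():
    board.push_uci(transcribe_silent_speech())  # opponent's move, decoded
    reply = suggest_move(board)
    print(f"suggested reply: {reply.uci()}")    # relayed back to the user
    board.push(reply)
```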
The team behind AlterEgo first needed to figure out which part of the face and jaw had the strongest signals so they knew where to put the device. In their paper on the study, they describe the prototype as "a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws."
Tests indicated that they could get the same results with fewer electrodes on only one side of the face. Further experiments found that, on average, the system transcribed words accurately 92% of the time, the report said. As the device and AI learn more human speech, the accuracy will increase, Kapur said, noting that his own device, which he had been using extensively, had a higher accuracy rate than those used for brief periods by test subjects.
In addition to communication in loud environments, Professor Starner wondered whether the technology could help those with speaking disabilities or those who have lost the ability to speak because of illness.
"I think that they're a little underselling what I think is a real potential for the work," Starner told MIT News. "The last one is people who have disabilities where they can't vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?"
*This article was featured on the TechRepublic website on April 6, 2018: https://www.techrepublic.com/article/mit-researchers-develop-tech-to-transcribe-the-words-youre-thinking/