
Columbia Researchers Developed Technology That Can Translate Brain Activity Into Words

Neuroengineers at Columbia University have developed a way to use artificial intelligence and speech synthesizers to translate brain activity into words.

The team’s paper, published Tuesday in Scientific Reports, describes a system that can read the brain activity of a listening patient and then reconstruct what that patient heard with a clarity never before achieved by such technology.

The breakthrough opens a path for speech neuroprosthetics, implants that communicate directly with the brain. Ideally, the technology will someday allow people who have lost the ability to speak to regain a voice, helping patients who have suffered a stroke or who live with amyotrophic lateral sclerosis (ALS) communicate more easily with loved ones.

“If the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,” Dr. Nima Mesgarani, the paper’s senior author and a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute, explained in a statement.

“This would be a game changer,” he continued. “It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”

Right now, the technology can only voice what a person has listened to, not what they wish to say, but further development could extend the system to imagined speech.

The technology is based on decades of research showing that when people speak or listen, or even imagine speaking or listening, certain patterns of activity appear in the brain.

To create their technology, Mesgarani and his team worked with Dr. Ashesh Dinesh Mehta, a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute and a co-author of the paper, who treats epilepsy patients undergoing brain surgery.

“Working with Dr. Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people, while we measured patterns of brain activity,” said Mesgarani.

Those measurements trained a vocoder, a computer algorithm that can learn to synthesize speech from recordings of people talking; it is the same kind of technology that lets Amazon Echo and Apple’s Siri give verbal responses. Instead of recordings, though, the study’s vocoder needed to learn from brain activity.
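To make that idea concrete, here is a minimal, hypothetical sketch of such a training step: a small neural-network regressor that maps brain-activity features to per-frame speech-synthesis (vocoder) parameters. Everything here, including the data shapes, the synthetic stand-in data, and the model choice, is an assumption for illustration; it is not the authors’ code or their actual architecture.

```python
# Illustrative sketch only, NOT the study's code: train a decoder that maps
# brain-activity features to speech (vocoder) parameters. All shapes, names,
# and the synthetic stand-in data below are assumptions for demonstration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_frames = 2000         # time frames of aligned neural/speech data
n_electrodes = 128      # hypothetical number of recording electrodes
n_vocoder_params = 32   # hypothetical vocoder parameters per frame

# Stand-ins for real data: neural features recorded while patients listened
# to speech, time-aligned with vocoder parameters of that same speech.
neural_features = rng.standard_normal((n_frames, n_electrodes))
mixing = rng.standard_normal((n_electrodes, n_vocoder_params))
vocoder_params = neural_features @ mixing + 0.1 * rng.standard_normal(
    (n_frames, n_vocoder_params)
)

# Fit a simple neural-network regressor: brain activity -> speech parameters.
decoder = MLPRegressor(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
decoder.fit(neural_features, vocoder_params)

# At test time, fresh brain recordings would be decoded into speech
# parameters, which a vocoder then turns into audible speech.
decoded = decoder.predict(neural_features[:10])
print(decoded.shape)  # (10, 32)
```

The real system learned from invasive recordings in listening patients and used far more sophisticated models; the toy regressor above only captures the overall shape of the mapping.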

After the system was familiar with the patients’ brain activity, the patients listened to a recording of someone reciting the numbers zero through nine while their brain activity was recorded and run through the vocoder. The vocoder’s output was then analyzed and cleaned up by neural networks, a type of artificial intelligence, and the result was a robotic-sounding voice reciting the digits zero through nine.
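The last step, turning decoded parameters back into audible sound, can also be sketched generically. The study used its trained vocoder together with neural-network cleanup; as a simple stand-in (not the paper’s method), the example below inverts a predicted magnitude spectrogram to a waveform with the generic Griffin-Lim algorithm via the librosa library.

```python
# Illustrative stand-in for the synthesis step, NOT the study's vocoder:
# invert a (pretend) decoded magnitude spectrogram into a waveform using
# the generic Griffin-Lim phase-estimation algorithm.
import numpy as np
import librosa
import soundfile as sf

rng = np.random.default_rng(0)

# Pretend this came from the decoder: a magnitude spectrogram of shape
# (frequency_bins, time_frames). Real decoded output would replace it.
predicted_spectrogram = np.abs(rng.standard_normal((1025, 200)))

# Iteratively estimate phase and invert the spectrogram to audio samples.
waveform = librosa.griffinlim(predicted_spectrogram, n_iter=32, hop_length=256)

# Write the result to disk as a 16 kHz WAV file.
sf.write("reconstructed.wav", waveform, 16000)
```

Because the spectrogram here is random noise, the output is just noise; with genuinely decoded parameters, this kind of inversion step is what would produce the robotic voice described above.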

Subjects were then asked to listen to the recording and were able to “understand and repeat the sounds about 75% of the time, which is well above and beyond any previous attempts,” said Mesgarani. “We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”
