AI system pioneers 'brain-to-speech' technology

A new AI system developed at Columbia University is capable of converting brainwaves to speech

None of us are mind readers, but thanks to machine learning algorithms, a new AI system just might be.

Neurologists at Columbia University combined speech synthesisers with artificial intelligence to reconstruct the words a person hears: the system decodes signals recorded from the human auditory cortex and translates them into coherent speech.

The aim is to develop a system that can translate the brain waves of patients who are unable to speak, freeing them from reliance on conventional eye-controlled cursor boards.

At its heart is a vocoder: a computer algorithm that can synthesise speech after being 'trained' on recordings of people talking.
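To make the decoding idea concrete, here is a heavily simplified sketch. The actual Columbia system used deep neural networks and a vocoder trained on real electrocorticography recordings; the simulated data, dimensions, and the plain least-squares decoder below are illustrative assumptions, not the researchers' method. The sketch shows only the core step: learning a mapping from neural signals back to speech features, which a vocoder would then render as audio.

```python
import numpy as np

# Illustrative toy model -- NOT the Columbia implementation.
# We fake "auditory cortex" activity as a noisy linear encoding of
# speech features, then fit a decoder that recovers those features.

rng = np.random.default_rng(0)

n_samples, n_electrodes, n_speech_feats = 500, 32, 16

# Hypothetical speech features (e.g. spectrogram frames a vocoder consumes).
speech = rng.standard_normal((n_samples, n_speech_feats))

# Simulated neural responses: linear encoding of speech plus sensor noise.
encoding = rng.standard_normal((n_speech_feats, n_electrodes))
neural = speech @ encoding + 0.1 * rng.standard_normal((n_samples, n_electrodes))

# Decoding step: least-squares regression from neural signals to speech
# features. A real system would pass the decoded features to a vocoder.
decoder, *_ = np.linalg.lstsq(neural, speech, rcond=None)
reconstructed = neural @ decoder

error = np.mean((reconstructed - speech) ** 2)
print(f"mean squared reconstruction error: {error:.4f}")
```

Even this linear toy recovers the simulated speech features closely; the hard part the researchers solved is that real cortical responses are nonlinear and noisy, which is why deep networks were needed.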

Five patients undergoing neurosurgery for epilepsy were studied: electrode implants in the patients' brains recorded electrocorticography measurements while the patients listened to short stories spoken by four different speakers.

The system is the first of its kind and represents a major breakthrough in neuro-speech technology, but while it is a successful proof of concept, much progress remains before full sentences can be converted to synthesised speech.

Dr Nima Mesgarani, a principal investigator at Columbia University's Mortimer B. Zuckerman Mind Brain Behaviour Institute, said: "With today's study, we have a potential way to restore that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener."