For many years, neuroengineers have pursued the ambition of restoring communication to people who have lost the ability to speak because of severe medical conditions. Amyotrophic lateral sclerosis (ALS) progressively weakens the muscles used for speech and breathing, while strokes can damage the neural pathways vital for speech. Scientists envisioned a future in which implanted electrodes could capture the brain's electrical signals and transform them into spoken words, offering a revolutionary means of expression.
A recent study marks a major step toward realizing that goal. Previous work had successfully decoded signals associated with attempted speech. Now, in a study published in the journal Cell, the research team demonstrated that their computational system could often accurately interpret words that participants merely imagined. Christian Herff, a neuroscientist at Maastricht University, described the development as a remarkable advance, one that not only pushes the technology forward but also deepens our understanding of how language works.
This latest study emerges from the ongoing BrainGate2 clinical trial, which has already yielded impressive results. One participant, Casey Harrell, now utilizes a brain-machine interface to engage in conversations with his loved ones. Diagnosed with ALS, which rendered his speech incomprehensible by 2023, Mr. Harrell underwent surgery to implant electrode arrays in the motor cortex of his brain, the region responsible for generating speech commands.
The implanted electrodes recorded Mr. Harrell's brain activity as he attempted to articulate words. With the help of artificial intelligence, the system eventually achieved 97.5 percent accuracy in predicting nearly 6,000 words. It could also synthesize those words in Mr. Harrell's original voice, reconstructed from prior recordings. These successes, however, prompted critical questions about mental privacy: could the system inadvertently access thoughts not intended for verbalization? Researchers, including Stanford neuroscientist Erin Kunz, sought to understand whether there was a risk of decoding unintended words and whether "inner speech" could offer a less fatiguing communication alternative for patients.
Dr. Kunz hypothesized that decoding inner speech could alleviate the physical exertion associated with attempted verbalization, enabling extended use of the communication system. Yet, the feasibility of decoding inner speech remained uncertain, partly because there is no universal scientific consensus on its nature. The brain's language network, a complex system roughly the size of a large strawberry, is involved in forming thoughts and converting them into spoken words, sign language, or text. Many individuals experience their thoughts as an "inner voice," raising questions about its role in cognitive processes.
While some theories hold that language is fundamental to thought, other research suggests that much of human cognition occurs independently of language, with the inner voice acting more as a spontaneous internal commentary. Dr. Evelina Fedorenko of M.I.T. noted how widely people's experience of inner speech varies. Intrigued, Dr. Kunz and her team compared the brain signals produced when participants imagined words with those produced when they attempted to speak them. They found that imagined words generated similar, albeit weaker, activity patterns, allowing the computer to predict intended words with varying success across participants. Training the system specifically on inner speech significantly improved its performance, enabling accurate decoding of entire imagined sentences.
Although the current inner speech decoding capabilities are not yet sufficient for fluid conversation, Dr. Kunz considers the results a crucial proof-of-concept. She remains optimistic about the future, noting that more recent, unpublished trials have shown further improvements in accuracy and speed. A significant ethical concern emerged when researchers occasionally detected words that participants were not consciously imagining for verbal output. For instance, during a task involving counting colored shapes, the system sometimes picked up number words, suggesting it could access silent mental processes.
These findings, particularly the unintended decoding of silent counting, highlight the profound link between language and thought for some individuals, as observed by Dr. Herff. To address mental privacy concerns, Dr. Kunz's team devised two potential safeguards. One strategy involves programming the system to exclusively decode attempted speech, distinguishing it from inner speech. The other solution proposes an "inner password" to activate and deactivate decoding, ensuring that personal thoughts remain private. For this, they humorously selected "Chitty Chitty Bang Bang."
A 68-year-old participant with ALS successfully used "Chitty Chitty Bang Bang" as an internal password, demonstrating the feasibility of user-controlled privacy with 98.75 percent accuracy. Cohen Marcus Lionel Brown, a bioethicist, praised this as an ethical advance, emphasizing its potential to grant patients greater autonomy over what information they share. Despite these successes, Dr. Fedorenko, while acknowledging the study's methodological rigor, questioned the extent to which implants could truly "eavesdrop" on all thoughts, suggesting that much spontaneous thought may not take the form of well-formed linguistic structures.