AI can now read what we’re thinking

For the first time, researchers have managed to use GPT-1, a precursor of the AI chatbot ChatGPT, to translate MRI imagery into text in an effort to understand what someone is thinking.

This recent breakthrough allowed researchers at the University of Texas at Austin to “read” someone’s thoughts as a continuous flow of text, based on what they were listening to, imagining or watching.

It raises significant concerns for privacy, freedom of thought, and even the freedom to dream without interference. Our laws are not equipped to deal with the widespread commercial use of mind-reading technology – freedom of speech law does not extend to the protection of our thoughts.

Participants in the Texas study were asked to listen to audiobooks for 16 hours while inside an MRI scanner. At the same time, a computer “learnt” how to associate their brain activity from the MRI with what they were listening to. Once trained, the decoder could generate text from someone’s thoughts while they listened to a new story, or imagined a story of their own.
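To make that training-and-decoding loop concrete, here is a minimal, hypothetical sketch in Python. It is not the Texas team's pipeline: the stimulus features, the simulated brain data and the candidate texts are all toy stand-ins (the real study used language features derived from GPT-1 and a search over candidate word sequences), but it shows the core idea of fitting an encoding model from text features to brain activity and then ranking candidate texts by how well their predicted brain response matches a new recording.

```python
# Illustrative sketch only, NOT the authors' actual method.
# All data below is randomly generated; in the real study, text_features would
# come from a language model and brain_response from fMRI recordings.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# --- Step 1: training data ---------------------------------------------------
# Each scanner timepoint pairs a vector describing the words being heard
# (text_features) with a vector of voxel activations (brain_response).
n_timepoints, n_features, n_voxels = 500, 32, 100
text_features = rng.normal(size=(n_timepoints, n_features))
brain_response = text_features @ rng.normal(size=(n_features, n_voxels)) \
                 + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Fit an encoding model: predict brain activity from text features.
encoding_model = Ridge(alpha=1.0).fit(text_features, brain_response)

# --- Step 2: decoding a new recording ----------------------------------------
# Given new brain activity, score candidate texts by how well the encoding
# model's predicted response matches what was actually recorded.
true_candidate = rng.normal(size=(1, n_features))
observed = encoding_model.predict(true_candidate)  # stands in for a new scan

candidates = [true_candidate] + [rng.normal(size=(1, n_features)) for _ in range(9)]

def score(candidate_features, observed_response):
    """Higher is better: correlation between predicted and observed activity."""
    predicted = encoding_model.predict(candidate_features)
    return np.corrcoef(predicted.ravel(), observed_response.ravel())[0, 1]

best = max(candidates, key=lambda c: score(c, observed))
print("Picked the matching candidate:", np.allclose(best, true_candidate))
```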

According to the researchers, the process was labor-intensive and the computer only managed to get the gist of what someone was thinking. However, the findings still represent a significant breakthrough in the field of brain-machine interfaces, which up to now have relied on invasive medical implants. Previous non-invasive devices could only decipher a handful of words or images.

Here’s an example of what one of the subjects was listening to (from an audiobook):

I got up from the air mattress and pressed my face against the glass of the bedroom window, expecting to see eyes staring back at me but instead finding only darkness.

And here’s what the computer “read” from the subject’s brain activity:

I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn’t see anything and looked up again I saw nothing.

The study participants had to cooperate to both train and apply the decoder, so that the privacy of their thoughts was maintained. However, the researchers warn that “future developments might enable decoders to bypass these requirements”. In other words, mind-reading technology could one day be applied to people against their will.

Future research may also speed up the training and decoding process. While it took 16 hours to train the machine to read what someone was thinking in the current version, this is likely to decrease significantly with future refinements. And as we have seen with other AI applications, the decoder is also likely to get more accurate over time.

There’s another reason this represents a step-change. Researchers have been working for decades on brain-machine interfaces in a race to create mind-reading technologies that can perceive someone’s thoughts and turn them into text or images. But typically, this research has relied on medical implants, with the focus on helping people with disabilities speak their thoughts.

Neuralink, the neurotechnology company founded by Elon Musk, is developing a medical implant that can “let you control a computer or mobile device anywhere you go.” But the need to undergo brain surgery to have a device implanted in you is likely to remain a barrier to the use of such technology.

The improvements in accuracy of this new non-invasive technology could make it a game changer, however. For the first time, mind-reading technology looks viable by combining two technologies that are readily available – albeit with a hefty price tag. MRI machines currently cost anywhere between US$150,000 and US$1 million.

Legal and ethical ramifications

Data privacy law currently does not consider thought as a form of data. We need new laws that prevent the emergence of thought crime, thought data breaches, and even one day, perhaps, the implantation or manipulation of thought.

Going from reading thought to implanting it may take a long time yet, but both require pre-emptive regulation and oversight.

Misuse of the technology could allow employers to exert new levels of control over workers. Image: Monkey Business Images / Shutterstock via The Conversation

Researchers from the University of Oxford are arguing for a legal right to mental integrity, which they describe as:

A right against significant, non-consensual interference with one’s mind.

Others are beginning to defend a new human right to freedom of thought. This would extend beyond traditional definitions of free speech, to protect our ability to ponder, wonder and dream.

A world without regulation could become dystopian very quickly. Imagine a boss, teacher or state official being able to invade your private thoughts – or worse, being able to change and manipulate them.

We are already seeing eye-scanning technologies being deployed in classrooms to track students’ eye movements during lessons, to tell if they’re paying attention. What happens when mind-reading technologies come next?

Similarly, what happens in the workplace when employees are no longer allowed to think about dinner, or anything outside of work? The level of abusive control of workers could exceed anything previously imagined.

George Orwell wrote convincingly of the dangers of “Thoughtcrime”, where the state makes it a crime to merely think rebellious thoughts about an authoritarian regime. The plot of Nineteen Eighty-Four, however, was based on state officials reading body language, diaries or other external indications of what someone was thinking.

With new mind-reading technology, Orwell’s novel would become very short indeed – perhaps even as short as a single sentence:

Winston Smith thought to himself: “Down with Big Brother” – following which, he was arrested and executed.

Joshua Krook is Research Fellow in Responsible Artificial Intelligence, University of Southampton

This article is republished from The Conversation under a Creative Commons license. Read the original article.