Scientists Can’t Read Your Mind With Brain Scans (Yet)

As a journalist who writes about neuroscience, I’ve gotten a lot of super enthusiastic press releases touting a new breakthrough in using brain scans to read people’s minds. They usually come from a major university or a prestigious journal. They make it sound like a brave new future has suddenly arrived, a future in which brain scans advance the cause of truth and justice and help doctors communicate with patients whose minds are still active despite their paralyzed bodies.
Amazing, right? Drop everything and write a story!
Well, not so fast. Whenever I read these papers and talk to the scientists, I end up feeling conflicted. What they’ve done, so far anyway, really doesn’t live up to what most people have in mind when they think about mind reading. Then again, the stuff they actually can do is pretty amazing. And they’re getting better at it, little by little.

In pop culture, mind reading usually looks something like this: Somebody wears a goofy-looking cap with lots of wires and blinking lights while guys in white lab coats huddle around a monitor in another room to watch the movie that’s playing out in the person’s head, complete with cringe-inducing internal monologue.

We are not there yet.

“We can decode mental states to a degree,” said John-Dylan Haynes, a cognitive neuroscientist at Charité-Universitätsmedizin Berlin. “But we are far from a universal mind reading machine. For that you would need to be able to (a) take an arbitrary person, (b) decode arbitrary mental states and (c) do so without long calibration.”

“To me, mind reading is where something is wholly subjective and private, and I can’t tell from what you’re doing or looking at what your mental state is,” said Frank Tong, a neuroscientist at Vanderbilt University. Tong draws a distinction between that kind of mind reading and what he calls brain reading, which essentially means using brain scans to figure out what’s on someone’s mind in situations where you could accomplish the same thing by simply looking over their shoulder or waiting a few seconds to see what they do next.

Most of the research to date falls in this second category.

Here’s a recent example. A team led by Marvin Chun, a cognitive neuroscientist at Yale, published a study last month in which they used brain scans to reconstruct pictures of faces that the subjects had been looking at during the scan. On one hand, whoa. Using scanners and computers, they produced something like a printout of what people saw (see below). On the other hand, the researchers carefully controlled what the subjects saw.

In broad strokes, here’s what the Yale researchers did. They created mathematical descriptions of 300 images of faces. All were portraits, shot from the same angle. Then they did fMRI scans on six people to record the pattern of brain activity elicited by each of those 300 faces. Next, they fed those patterns of brain activity into a statistical matching algorithm they’d developed to serve as a kind of translator. After it’s been “trained” on lots of examples, the translator can look at a pattern of brain activity and predict the image that produced it.

To test the translator, the researchers re-scanned the same subjects as they looked at 30 new faces that weren’t in the original set of 300, and the computer created a reconstruction of each face it thought the person saw. To find out how good these reconstructions were, the researchers recruited 261 people via Amazon’s Mechanical Turk and had them match up reconstructed images with originals. They got about 60 to 70 percent right, Chun says: better than chance, but far from perfect.
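The train-then-test pipeline described above can be sketched in miniature. The code below is a toy illustration, not the Yale team’s actual method: it stands in synthetic numbers for the fMRI scans, uses plain ridge regression as the “translator” from brain activity to face features, and scores reconstructions by matching each one against the held-out originals, loosely as the Mechanical Turk raters did. All dimensions and variable names are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions loosely echoing the study's setup:
# 300 training faces, 30 held-out faces, each face summarized by a
# small feature vector, each scan by a vector of voxel activations.
n_train, n_test, n_feat, n_vox = 300, 30, 10, 50

# Simulate "true" face features and noisy brain responses that are a
# linear function of those features (a toy stand-in for fMRI data).
W_true = rng.normal(size=(n_feat, n_vox))
faces_train = rng.normal(size=(n_train, n_feat))
faces_test = rng.normal(size=(n_test, n_feat))
scans_train = faces_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))
scans_test = faces_test @ W_true + 0.1 * rng.normal(size=(n_test, n_vox))

# "Train the translator": ridge regression mapping brain activity back
# to face features, fit on the 300 training pairs only.
lam = 1.0
A = scans_train.T @ scans_train + lam * np.eye(n_vox)
B = scans_train.T @ faces_train
decoder = np.linalg.solve(A, B)          # shape (n_vox, n_feat)

# Reconstruct the 30 held-out faces from their scans alone.
reconstructed = scans_test @ decoder

# Score each reconstruction by matching it against all 30 originals
# and checking whether its own original is the best match.
sims = reconstructed @ faces_test.T      # similarity matrix
matches = (sims.argmax(axis=1) == np.arange(n_test)).mean()
print(f"correct matches: {matches:.0%}")  # should far exceed 1/30 chance
```

With low noise and a linear world, the toy decoder matches nearly every face; real fMRI data are far noisier and nonlinear, which is why the actual study’s raters landed around 60 to 70 percent rather than near-perfect.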

“In a sense I think it is a form of mind reading, because the computer algorithm doesn’t know what the person saw,” Chun said. The scans and the algorithms can’t yet reconstruct any old image that pops up in someone’s mind, though. “The step that has to happen next is doing this with imagined faces or faces recalled from memory,” Chun said. “Then it would be true mind reading.”

It’s an important distinction. Most “mind-reading” studies so far have focused on the here and now. “Vision is by far the easiest thing to work with,” said Jack Gallant, a neuroscientist at the University of California, Berkeley. Gallant’s lab has done some of the most eye-catching work in this area, including a 2011 study that used fMRI scans to decode video imagery as people watched clips cut from Hollywood films (see video below). That’s partly because scientists have a relatively good idea of how visual information is represented in the brain (compared to, say, abstract thought), and partly because it’s easy to control what someone sees during an experiment.

The next step up in degree of difficulty, in Gallant’s view, and perhaps a baby step closer to the pop-culture concept of mind reading, is predicting what people are about to do. Scientists have had some success in doing this, at least in simple circumstances. In a 2008 study, for example, Haynes and colleagues asked people to arbitrarily press a button with either their left or right hand. The subjects’ brain activity gave their choice away seconds before they became conscious of it themselves.


An even bigger step towards “true mind reading,” as Chun says, would be decoding mental images, the kind of pictures you see in your mind’s eye as you’re staring off into space, or falling asleep.

Last year, Yukiyasu Kamitani and colleagues at the Advanced Telecommunications Research Institute International in Kyoto, Japan, used fMRI scans to determine what objects people were dreaming about as they fell asleep (they confirmed this by waking the subjects up and asking them, hundreds of times over the course of the study). The dream decoding was pretty rudimentary, though. They could tell someone was dreaming about a car, for instance, but not what kind of car.

“The ultimate device for decoding mental imagery would be a device for decoding your internal monologue,” said Gallant. “It would be like talking to Siri except without even talking.” In some ways that might be easier to pull off than it sounds, Gallant says. “Language is a statistically constrained signal; the number of words you could think is a lot less than the number of images you could see.”

If decoding what people see and what they’re just about to do next is where the field is now, and decoding mental imagery is what’s on the horizon, Gallant says there’s yet another type of decoding that’s more like the distant frontier: decoding old memories. “If I ask you to picture your first grade teacher, you might be able to recall his or her name and call up a pretty rough mental image,” he said. “That information is buried in your brain, but you probably hadn’t thought about it for years.” Scientists don’t understand how old memories are encoded in our brains well enough to decode them, but some day they might.

If you’re starting to feel a panic attack coming on, take a deep breath and relax. Scientists are very, very far from being able to dredge up those best-forgotten memories from your grade school days (or worse, junior high). They don’t even want to.

The technology is still pretty limited. And there are other obstacles, such as individual differences. “Different people’s brains code information slightly differently, so you need to learn how a specific individual codes their mental states,” said Haynes. “There is only limited transfer from person to person.” Moreover, at least for the foreseeable future, brain decoding will require full cooperation (not to mention considerable patience) on the part of the subject.

The researchers doing this work are more interested in the scientific question of how the brain encodes things like perception, memory, and emotion than in the mad-scientist pursuit of decoding people’s thoughts; it’s just that they’re two sides of the same coin. As scientists get better at one, they get better at the other.

As they do, their work will inevitably raise some tricky social and ethical questions.

Gallant worries about the implications for mental privacy, even though he thinks any truly worrying thought-stealing technology is still decades away. “I tend to be a pretty paranoid person,” he said. “As a scientist I’m not sure what to do other than to tell people we need to start thinking about this because somewhere down the road we’re going to be able to do it really well.”

In the meantime, we should keep our expectations in check–and keep a cautious eye on the future.
Courtesy: Wired