Someday, you might watch your own dreams on YouTube

UC Berkeley scientists decoded visual experiences in people’s brains and reconstructed them into YouTube videos.

Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding “visual experiences” in people’s brains and reconstructing them into videos.

This is the first step in a process that might someday let you record and reconstruct your own dreams on a computer screen, these researchers believe.

Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online September 22, 2011, in the journal Current Biology, said:

This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds.

Shinji Nishimoto is lead author of the study and a post-doctoral researcher in the Gallant Lab at UC Berkeley. He and two other research team members served as test subjects for the experiment, because the procedure required volunteers to remain still inside the MRI scanner for hours at a time. The left clip of the video above is a segment of a movie that subjects viewed while inside the MRI scanner. The right clip, above, shows a reconstruction of this movie, created using brain activity from the test subjects as measured with fMRI.

How did they do it? It’s a bit confusing because the test subjects watched movies to, well, create movies. Still, the same process should apply no matter what sort of visual experience you’re having. In this experiment, the reconstruction was obtained using three ingredients: each subject’s brain activity, a library of 18 million seconds of random YouTube video clips, and a computer program that was able to link the two together. If you want to learn the details of the process, read this press release from UC Berkeley or, better yet, this super article in Gizmodo by Jesus Diaz, who wrote:

This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information from the movies to specific brain actions. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity … After recording this information, another group of clips was used to reconstruct the videos shown to the subjects.
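The logic of that process can be sketched in code. The toy example below is purely illustrative, not the researchers’ actual pipeline: it uses made-up numbers in place of real fMRI data and movie features, fits a simple linear “encoding model” from clip features to simulated brain responses, then decodes an unseen response by finding the library clips whose predicted responses match it best and averaging them, loosely mirroring the clip-library step described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the real study used fMRI voxel responses and
# motion-energy features of movie frames; here everything is synthetic.
n_features, n_voxels = 20, 50

# An invented "true" mapping from clip features to brain responses.
W_true = rng.normal(size=(n_features, n_voxels))

def brain_response(clip_features, noise=0.1):
    """Simulate a noisy fMRI response to one clip."""
    return clip_features @ W_true + noise * rng.normal(size=n_voxels)

# A "library" of clips, each summarized by a feature vector.
library = rng.normal(size=(1000, n_features))

# Training: learn the encoding model from (features, response) pairs
# recorded while the subject watches training movies.
train_clips = rng.normal(size=(500, n_features))
train_resp = np.array([brain_response(c) for c in train_clips])
W_hat, *_ = np.linalg.lstsq(train_clips, train_resp, rcond=None)

# Decoding: observe the response to an unseen clip, predict what each
# library clip's response would look like, and average the best matches.
target = library[123]                      # the clip actually "seen"
observed = brain_response(target)
predicted = library @ W_hat                # predicted response per clip
errors = np.linalg.norm(predicted - observed, axis=1)
top = np.argsort(errors)[:10]              # 10 closest library clips
reconstruction = library[top].mean(axis=0)

print("best-matching library clip:", int(top[0]))
```

Averaging several top matches, rather than taking only the single best clip, is what gives the real reconstructions their blurry, dreamlike look: the output is a blend of many similar clips, not a replay of the original footage.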

The scientists suggest:

Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube.

Ultimately, Nishimoto said, scientists need to understand how the brain processes dynamic visual events that we experience in everyday life. He said:

We need to know how the brain works in naturalistic conditions. For that, we need to first understand how the brain works while we are watching movies.

Indeed.

Bottom line: Scientists in the Gallant Lab at UC Berkeley used functional Magnetic Resonance Imaging (fMRI) and computational models to decode visual experiences in people’s brains and reconstruct them into videos. They say this might be the first step in a process that will ultimately let us see what is happening in the mind of a coma patient – or watch our own dreams on YouTube.


Deborah Byrd
