Translating Human Perception Into Visual Images (Think ‘Brainstorm’)
If you missed it yesterday, you should really listen to Professor Jack Gallant’s conversation with Michael Krasny on KQED Radio’s Forum show. Gallant is a professor in the psychology and neuroscience programs at UC Berkeley, with affiliations in the Bioengineering, Biophysics, and Vision Science programs. He and his lab caused a stir this week when reports broke that, as the San Jose Mercury News describes…
“(S)cientists…have designed a way to decode, then re-create, human perception — a breakthrough that someday could be used to reproduce dreams, fantasies, memories and the other images from inside our heads…
The scientists acted as subjects, sitting inside a functional magnetic resonance imaging, or fMRI, scanner for hours at a time. While they watched Hollywood movie trailers, the fMRI scanner measured blood flow through their visual cortex, the part of the brain that processes visual information.
This brain activity was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
Then the images were reconstructed. This was done by feeding 5,000 hours of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.”
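The pipeline described above — fit a model that maps video features to voxel activity, then rank a large library of clips by how well each clip’s predicted activity matches the activity actually measured — can be sketched roughly as follows. This is a toy illustration with made-up dimensions and synthetic data, not the Gallant lab’s actual code; the ridge-regression fit and correlation-based ranking are assumptions about the general approach:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (the real study used thousands of voxels and a
# library of ~18 million seconds of YouTube video).
n_train, n_feat, n_vox = 200, 50, 30
n_library = 1000

# Training data: visual features of viewed clips -> measured voxel responses.
X_train = rng.standard_normal((n_train, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))          # hidden "true" mapping
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, n_vox))

# 1) Learn a linear encoding model per voxel (ridge regression, closed form).
alpha = 1.0
W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_feat),
                    X_train.T @ Y_train)

# 2) Predict the brain activity each library clip would most likely evoke.
library_feats = rng.standard_normal((n_library, n_feat))
predicted = library_feats @ W                          # (n_library, n_vox)

# 3) Given measured activity for a newly viewed clip, rank library clips by
#    correlation between their predicted activity and the measurement.
target_feats = library_feats[42]                       # pretend clip 42 was viewed
measured = target_feats @ W_true + 0.1 * rng.standard_normal(n_vox)

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scores = np.array([corr(p, measured) for p in predicted])
top = np.argsort(scores)[::-1][:10]

# 4) Average the best-matching clips as a crude "reconstruction".
reconstruction = library_feats[top].mean(axis=0)
print("best-matching library clip:", top[0])
```

Because the measured activity here was generated from clip 42’s own features, the model should rank that clip first; in the real experiment the library clips never include the viewed movie, so the reconstruction is only ever a blurry average of look-alikes, which is why the videos below appear so dreamlike.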
And if you haven’t seen these videos yet, take a look.
- “The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI.”
- “The movie that each subject viewed while in the magnet is shown at upper left. Reconstructions for three subjects are shown in the three rows at bottom. All these reconstructions were obtained using only each subject’s brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli.”