This is awesome.
They used three different subjects for the experiments (incidentally, they were part of the research team, because the job required lying inside a functional Magnetic Resonance Imaging system for hours at a time and nobody else wanted it). Inside the machine, they were shown two different groups of Hollywood movie trailers while the fMRI system recorded the blood flow through their brains' visual cortex.
The readings were fed into a computer program, which divided them into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information in the movies to specific brain activity. As the sessions progressed, the computer gradually learned how the visual patterns presented on the screen corresponded to the brain activity they produced.
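To make the learning phase concrete, here is a toy sketch (not the study's actual model or features) of fitting a linear mapping from stimulus descriptors to each voxel's response; all array sizes and variable names are illustrative assumptions:

```python
import numpy as np

# Toy sketch: learn a linear encoding model mapping hypothetical
# stimulus features (stand-ins for shape/motion descriptors) to
# each voxel's measured response.
rng = np.random.default_rng(1)
n_samples, n_features, n_voxels = 200, 10, 5
features = rng.normal(size=(n_samples, n_features))     # one row per movie frame
true_weights = rng.normal(size=(n_features, n_voxels))  # unknown voxel tuning
responses = features @ true_weights + 0.01 * rng.normal(size=(n_samples, n_voxels))

# Least-squares fit: recover the weights relating features to voxel activity.
weights, *_ = np.linalg.lstsq(features, responses, rcond=None)
print(np.allclose(weights, true_weights, atol=0.1))  # prints True
```

Once fitted, such a model can predict the brain activity a new clip should evoke, which is what makes the reconstruction step below possible.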
After recording this information, the brain activity evoked by the second group of clips was used to reconstruct the videos the subjects had watched. The computer analyzed 18 million seconds of random YouTube video, building a database of the brain activity each clip would be expected to produce. From all these videos, the software picked the one hundred clips whose predicted activity looked most similar to what the subject's brain actually did, combining them into the final movie. Although the result is low-res and blurry, it clearly matches the actual clips watched by the subjects.
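The selection step can be sketched in a few lines. This is an illustrative assumption of how "pick the most similar clips" might work (scoring each database clip by the correlation between its predicted activity and the measured activity), not the study's actual code, and the sizes are toy values rather than the real 18-million-second database:

```python
import numpy as np

# Hypothetical sketch of the clip-selection step.
rng = np.random.default_rng(0)
n_clips, n_voxels = 1000, 50  # toy database, toy voxel count
predicted = rng.normal(size=(n_clips, n_voxels))  # model's prediction per clip
measured = predicted[42] + 0.1 * rng.normal(size=n_voxels)  # noisy copy of clip 42

# Score each clip by how well its predicted activity correlates
# with the measured brain activity, then keep the top 100.
scores = np.array([np.corrcoef(p, measured)[0, 1] for p in predicted])
top100 = np.argsort(scores)[::-1][:100]

# The reconstruction would average the top clips' frames; here we
# just confirm the true clip ranks first.
print(top100[0])  # prints 42
```

Averaging a hundred merely similar clips is also a plausible reason the final movie comes out blurry: the shared structure survives the average while clip-specific detail washes out.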