Neural portraits of perception: Reconstructing face images from evoked brain activity
Volume 94, 1 July 2014, Pages 12–22
What if you could "see" if your partner was cheating on you and with whom?
What if you could see your colleague's bright idea that might get him or her, and not you, promoted?
What if you could see inside someone else's dreams?
Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity.
While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex.
However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions.
Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network. Thus, we investigated
(a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and
(b) whether this could be achieved even when excluding activity within occipital cortex. Our approach involved four steps.
(1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces.
(2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces.
(3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores.
(4) Finally, these scores were transformed into reconstructed images.
Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex. This methodology not only represents a novel and promising approach for investigating face perception, but also suggests avenues for reconstructing ‘offline’ visual experiences—including dreams, memories, and imagination—which are chiefly represented in higher-level cortical areas.
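The four steps above correspond to an eigenface-style decoding pipeline. The sketch below is a rough illustration only: the abstract does not name the specific learning algorithm or the data dimensions, so ridge regression, scikit-learn's PCA, and the array shapes are assumptions introduced here for clarity, not the authors' actual implementation.

```python
# Hypothetical sketch of the four-step reconstruction pipeline described above.
# Assumptions (not from the paper): ridge regression as the "machine learning
# algorithm", scikit-learn's PCA, and illustrative array shapes / random data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Illustrative data (assumed shapes):
# train_faces: (n_train, n_pixels)  flattened grayscale training face images
# train_bold:  (n_train, n_voxels)  fMRI activity patterns for the same faces
# test_bold:   (n_test,  n_voxels)  activity patterns evoked by unseen test faces
rng = np.random.default_rng(0)
n_train, n_test, n_pixels, n_voxels = 200, 30, 64 * 64, 5000
train_faces = rng.random((n_train, n_pixels))
train_bold = rng.standard_normal((n_train, n_voxels))
test_bold = rng.standard_normal((n_test, n_voxels))

# Step 1: PCA identifies components ("eigenfaces") that efficiently
# represent the training faces; each face becomes a short score vector.
pca = PCA(n_components=50)
train_scores = pca.fit_transform(train_faces)        # (n_train, 50)

# Step 2: learn a mapping between fMRI activity and component scores
# (ridge regression is a stand-in for the unspecified algorithm).
decoder = Ridge(alpha=1.0)
decoder.fit(train_bold, train_scores)

# Step 3: predict component scores from activity evoked by new test faces.
predicted_scores = decoder.predict(test_bold)         # (n_test, 50)

# Step 4: transform the predicted scores back into pixel space to obtain
# the reconstructed face images.
reconstructions = pca.inverse_transform(predicted_scores)
reconstructed_images = reconstructions.reshape(n_test, 64, 64)
```

In this framing, excluding occipital cortex would simply mean dropping the corresponding voxel columns from the activity matrices before fitting the decoder.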