
Sunday, May 07, 2017

Mind Reading Algorithms Reconstruct What You’re Seeing Using Brain Scan Data | MIT Technology Review

"Perceived images are hard to decode from fMRI scans. But a new kind of neural network approach now makes it easier and more accurate." notes Emerging Technology from the arXiv.

A comparison of brain-image reconstruction techniques. The original images are shown in the top row, while the results of the new deep generative multiview model are shown in the bottom row.

One of the more interesting goals in neuroscience is to reconstruct perceived images by analyzing brain scans. The idea is to work out what people are looking at by monitoring the activity in their visual cortex.

The difficulty, of course, is finding ways to efficiently process the data from functional magnetic resonance imaging (fMRI) scans. The task is to map the activity in three-dimensional voxels inside the brain to two-dimensional pixels in an image.
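A minimal sketch of that mapping task, using entirely synthetic data in place of real scans: flatten each scan's voxel activations into a vector and learn a regularized linear map to pixel values (a common baseline approach, not the authors' method; all sizes and names here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the task: n_voxels of fMRI activity per scan,
# decoded into an 8x8 (64-pixel) image.  All data is synthetic.
n_scans, n_voxels, n_pixels = 200, 100, 64

# Fake "ground truth": images generated from voxel activity by a
# hidden linear map plus noise, mimicking a noisy recording.
true_map = rng.normal(size=(n_voxels, n_pixels))
voxels = rng.normal(size=(n_scans, n_voxels))
images = voxels @ true_map + 0.1 * rng.normal(size=(n_scans, n_pixels))

# Ridge regression: closed-form linear decoder from voxels to pixels.
lam = 1.0  # regularization strength
W = np.linalg.solve(
    voxels.T @ voxels + lam * np.eye(n_voxels),
    voxels.T @ images,
)

reconstruction = voxels @ W  # one decoded image per scan
print(reconstruction.shape)
```

A linear decoder like this treats each voxel's contribution independently, which is exactly the simplification that limits reconstruction quality.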

That turns out to be hard. fMRI scans are famously noisy, and the activity in one voxel is well known to be influenced by activity in other voxels. This kind of correlation is computationally expensive to deal with; indeed, most approaches simply ignore it. And that significantly reduces the quality of the image reconstructions they produce.  

So an important goal is to find better ways to crunch the data from fMRI scans and so produce more accurate brain-image reconstructions.

Today, Changde Du at the Research Center for Brain-Inspired Intelligence in Beijing, China, and a couple of pals say they have developed just such a technique. Their trick is to process the data using deep-learning techniques that handle nonlinear correlations between voxels more capably. The result is a significantly more accurate way to reconstruct perceived images.
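The deep generative multiview model itself is beyond a short sketch, but the core advantage it exploits — a nonlinear decoder can capture voxel interactions that a linear map cannot — can be illustrated with a toy one-hidden-layer network on synthetic data (this is an illustrative sketch, not the authors' model or code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: pixel values depend *nonlinearly* on voxel
# activity, the kind of relationship a linear decoder misses.
n, n_voxels, n_pixels, n_hidden = 300, 50, 16, 64
voxels = rng.normal(size=(n, n_voxels))
images = np.tanh(voxels @ rng.normal(size=(n_voxels, n_pixels)))

# One-hidden-layer decoder: voxels -> tanh hidden layer -> pixels.
W1 = 0.1 * rng.normal(size=(n_voxels, n_hidden))
W2 = 0.1 * rng.normal(size=(n_hidden, n_pixels))

lr, losses = 0.05, []
for step in range(200):
    h = np.tanh(voxels @ W1)          # hidden activations
    pred = h @ W2                     # decoded pixel values
    err = pred - images
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation by hand for the two weight matrices.
    gW2 = h.T @ err / n
    gh = (err @ W2.T) * (1 - h ** 2)  # tanh derivative
    gW1 = voxels.T @ gh / n
    W2 -= lr * gW2
    W1 -= lr * gW1

print(f"loss {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The nonlinearity in the hidden layer is what lets the network model correlated voxel activity jointly rather than voxel by voxel.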

Changde and co start with several data sets of fMRI scans of the visual cortex of a human subject looking at a simple image—a single digit or a single letter, for example. Each data set consists of the scans and the original image.

The task is to find a way to use the fMRI scans to reproduce the perceived image. In total, the team has access to over 1,800 fMRI scans and original images.

They treat this as a straightforward deep-learning task. They use 90 percent of the data to train the network to understand the correlation between the brain scan and the original image.
They then test the network on the remaining data by feeding it the scans and asking it to reconstruct the original images.
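The 90/10 split described above can be sketched as follows, with random arrays standing in for the roughly 1,800 scan/image pairs (the sizes are from the article; the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the ~1,800 fMRI scan / image pairs.
n = 1800
voxels = rng.normal(size=(n, 30))
images = rng.normal(size=(n, 10))

# Shuffle, then hold out 10 percent of the pairs for testing.
idx = rng.permutation(n)
split = int(0.9 * n)
train_idx, test_idx = idx[:split], idx[split:]

X_train, y_train = voxels[train_idx], images[train_idx]
X_test, y_test = voxels[test_idx], images[test_idx]

print(len(X_train), len(X_test))  # 1620 180
```

The held-out scans are then fed to the trained network, and its reconstructions are compared against the original images the subject actually saw.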
Read more...

Additional resources
Ref: arxiv.org/abs/1704.07575

Source: MIT Technology Review