Observing the brain at work has long been a dream of neuroscientists, and that dream may now be within reach. New deep-learning algorithms are decoding, from brain scans alone, what a person was looking at.
Scientists have long tried to capture the images whirling around in our heads, for example with functional magnetic resonance imaging (fMRI). But the scans produced so far have not yielded particularly usable results: as far as everyday applications are concerned, reading the brain's signals by machine is still in its infancy. That could now change. Researchers at the Chinese Academy of Sciences are developing neural-network algorithms that achieve remarkable things, such as reconstructing viewed images from brain scans, and with a level of precision that has never been reached before.
At the center of this work is the visual cortex, the "seeing" part of the brain. When you read an article like this one, for example, complex three-dimensional activity patterns light up in that region, changing constantly depending on which letter of the alphabet you happen to be looking at.
The first step is to make these 3D patterns visible with fMRI and record the results. The principle is simple: active brain areas draw in oxygen-rich blood, and when the transporter molecule hemoglobin gives up its oxygen, its magnetic properties change. This allows the fMRI scanner to pinpoint precisely where the oxygen-rich blood is flowing.
Telepathic deep learning
In the next step, a deep-learning algorithm, the Deep Generative Multiview Model (DGMM), is fed a sufficient number of brain scans along with the actual images the subjects were viewing. The model then learns on its own how to correlate the 3D brain patterns with the corresponding 2D images.
The Chinese researchers reused 1,800 fMRI scans from earlier studies in which subjects viewed letters and numbers. More precisely, they used 90 percent of them for training. On the remaining 10 percent, the algorithm had to show what it had "learned" by sketching what the subjects might plausibly have seen.
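That 90/10 division is a standard train/test split: the model never sees the held-out scans during training, so its sketches of them are a genuine test of generalization. A minimal sketch of such a split (the shuffling scheme is assumed, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# 1,800 scan indices, shuffled so train and test sets are unbiased samples.
n_scans = 1800
indices = rng.permutation(n_scans)

# 90 percent train the network; the held-out 10 percent test whether it
# can sketch images from scans it has never been shown.
split = int(0.9 * n_scans)
train_idx, test_idx = indices[:split], indices[split:]

print(len(train_idx), len(test_idx))  # 1620 180
```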
The results were quite impressive: the neural network recognized the handwritten letters and numbers perfectly. So far, however, the data come from only a handful of subjects, and it remains to be seen how a broader subject base affects the algorithm's success rate. Still, the researchers have made great strides, and next they want to try to "crack" video, an incomparably harder undertaking.
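How does one judge whether a reconstructed image matches the original? One common and simple measure (an illustrative choice here, not necessarily the metric the researchers used) is the pixel-wise Pearson correlation between the two images:

```python
import numpy as np

def reconstruction_score(original, reconstructed):
    """Pearson correlation between flattened original and reconstructed images."""
    a = np.ravel(original).astype(float)
    b = np.ravel(reconstructed).astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
img = rng.random((28, 28))                      # stand-in "viewed" image
noisy = img + 0.1 * rng.normal(size=img.shape)  # stand-in "reconstruction"

print(round(reconstruction_score(img, img), 2))  # 1.0 for identical images
print(reconstruction_score(img, noisy))          # high, but below 1.0
```

A score near 1.0 means the sketch closely tracks the original; noise and distortion pull it toward 0.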
The natural next step would be to look directly into the "self": at dreams, for example, for which fMRI has already been tried, so far without much success. It is quite possible, however, that algorithms like the DGMM will aid such efforts in the future.