16 December 2008

Seeing thoughts: the details

The machine read a visual image from the human brain (Membrana)

Recording colorful dreams from the brain of a sleeping person may become a reality within a few years. Having started with recognizing simple motor commands in patterns of neuronal activity, scientists have now progressed to reading visual images.

These pictures may still be primitive, but our thoughts are gradually ceasing to be territory closed to outsiders.

Without inserting any electrodes into the brain, the experimenters learned to determine reliably what a subject sees. Although the images presented so far are only black and white and contain just a hundred fairly large pixels (a 10 x 10 grid was used), this is a major step in understanding the "patterns" of neural activity that accompany a process as complex as visual perception.

The remarkable experiment by Japanese scientists opens the way to recognizing images in the human brain that a person has never seen in reality: dreams or imagined worlds. Just imagine an artist or designer who simply sits in an armchair and, closing his eyes, invents images that immediately appear on a computer screen (illustrations from pinktentacle.com and chunichi.co.jp).

Yukiyasu Kamitani and his team at the Computational Neuroscience Laboratories of the Advanced Telecommunications Research Institute (ATR Computational Neuroscience Laboratories) paint exactly this prospect.

Together with scientists from several other Japanese institutes and universities, they carried out the world's first visualization of what people see, based on recorded parameters of brain activity.

In 2006, Kamitani and his colleagues built and tested a curious variant of a brain-machine interface (BMI): a person lying in the scanner made various hand gestures, and a computer, relying only on patterns of brain activity, recognized the finger movements and issued the corresponding commands to a robotic hand, which repeated the gestures after the person.

That 2006 experiment was much simpler than the current one. After all, the person had a choice of only three "signs": rock, scissors, or paper.

Having taught the machine to distinguish the activity patterns of the neurons associated with these gestures with good accuracy, the Japanese team went further (Honda photos).

The new work is a profound development of that experiment, only this time the scientists focused on recognizing visual images in the brain. As before, functional magnetic resonance imaging (fMRI) was used.

Of course, no scanner will see "boats under sail" or "the sun over the river" inside a person's head. All it can do is show changes in blood flow through particular areas of the cortex, associated with the activity of particular groups of neurons. But by understanding the patterns in such changes, one can learn to perform the reverse transformation, from the excitation of neurons back to whatever caused that reaction, be it voices, thoughts, or the very pictures in front of one's eyes.
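To make this "reverse transformation" concrete, here is a minimal toy sketch in Python (entirely our own illustration with synthetic data, not the authors' software): voxel responses are simulated as a noisy linear mixture of the stimulus pixels, and the stimulus is then recovered by inverting that mapping. In the real experiment the mapping itself must first be learned from training data, as described below; the sizes here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_voxels = 100, 300   # hypothetical sizes for a toy example

# Forward direction (what the scanner sees): voxel activity as a noisy
# linear mixture of the stimulus pixels. The mixing matrix stands in
# for real cortical responses and is unknown to a real decoder.
mixing = rng.normal(size=(n_voxels, n_pixels))
stimulus = rng.integers(0, 2, size=n_pixels).astype(float)  # random 10x10 image
response = mixing @ stimulus + 0.3 * rng.normal(size=n_voxels)

# Reverse direction (the decoding problem): estimate the stimulus back
# from the response. With a known linear model this is just least squares.
estimate, *_ = np.linalg.lstsq(mixing, response, rcond=None)
recovered = (estimate > 0.5).astype(float)
print("fraction of pixels recovered:", (recovered == stimulus).mean())
```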

Note that this approach differs from the actively developing parallel line of mind reading that uses headbands or helmets with electroencephalogram (EEG) sensors. Scientists have already shown how a humanoid robot can be controlled this way, and some companies have even prepared commercial BMIs of this type for the market.

On the one hand, the use of a bulky fMRI scanner (unlike head-worn brain-wave sensors) confines mind-reading experiments to laboratories or hospitals; on the other, it reveals in far greater detail the momentary changes in different areas of the cortex caused by a particular stimulus.

Recently, incidentally, researchers from the Netherlands learned to identify traces of individual speech sounds heard by a person in patterns of brain activity. From this work to "telepathic communication" (which the Pentagon is so eager to obtain) there is still a vast distance. But the first steps in this field matter too. The Japanese experimenters led by Yukiyasu likewise explain that even getting 100-pixel black-and-white images "pulled out" of the human brain onto a screen is only the beginning. Yet even this experiment, if you look into it, is not so simple.

"Approximate guessing" by running through all possible pictures and comparing them with the "brain print" was out of the question: it would be hopelessly unproductive, because even a picture consisting of 100 black-or-white squares allows up to 2^100 possible combinations. This means the machine had to detect, in the pattern of neuronal activity, nearly every pixel of the picture the person saw, individually.
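A quick back-of-the-envelope check shows why brute-force matching is hopeless:

```python
# Number of distinct 10 x 10 black-and-white images: 2 to the 100th power.
n_images = 2 ** 100
print(f"{n_images:.3e}")                 # ~1.268e+30 candidate pictures

# Even testing a billion candidates per second would take far longer
# than the age of the universe:
years = n_images / 1e9 / (365 * 24 * 3600)
print(f"{years:.1e} years")              # ~4.0e+13 years
```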

To do this, the computer first had to find patterns in how particular neurons responded to the presented pictures. To train the machine, the experimenters showed the subjects 440 randomly generated 100-pixel images, each displayed for 6 seconds with 6-second pauses in between. All the while, the scanner supplied the computer with three-dimensional maps of the activity of groups of neurons in the visual cortex. Then came another series of images, no longer random noise but simple geometric shapes and individual letters.

After such training, the program had found the correspondence between the pixels of a test image and the neurons that switch on. And how correct the compiled "rules" turned out to be was easy to check.
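Here is a minimal sketch of that training stage, again a toy reconstruction of our own: synthetic responses stand in for real fMRI data, and the per-pixel logistic classifier and voxel count are our assumptions, not the authors' exact model, though the 440 training images match the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_pixels, n_voxels, n_train = 100, 300, 440   # 440 training images, as in the study

# Synthetic stand-in for fMRI data: responses are a noisy linear
# mixture of the 100 stimulus pixels (the true mixing stays hidden
# from the decoders, which see only stimulus/response pairs).
mixing = rng.normal(size=(n_voxels, n_pixels))
stimuli = rng.integers(0, 2, size=(n_train, n_pixels)).astype(float)
responses = stimuli @ mixing.T + 0.5 * rng.normal(size=(n_train, n_voxels))

# Train one binary classifier per pixel: voxel pattern -> pixel black/white.
decoders = []
for p in range(n_pixels):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(responses, stimuli[:, p])
    decoders.append(clf)

# Decode a new, unseen image from its (simulated) brain response.
test_image = rng.integers(0, 2, size=n_pixels).astype(float)
test_response = test_image @ mixing.T + 0.5 * rng.normal(size=n_voxels)
decoded = np.array([d.predict(test_response.reshape(1, -1))[0] for d in decoders])
print("fraction of pixels decoded correctly:", (decoded == test_image).mean())
```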

First, people were shown a variety of simple drawings (within the same 10 x 10 grid of dots), and these appeared on the monitor with good reliability. Second, the subjects were shown the word "neuron", and it too was faithfully reproduced on the computer screen.

The results of several tests of the technology on two subjects. Top row: the presented pictures. Middle rows: raw reconstructed images, each obtained from a single scan. Bottom row: the averaged reconstructed pictures (illustration by Yukiyasu Kamitani et al.).

The key to success was building models of the responses of groups of neurons at different scales for the same picture. That is, after receiving a signal from the scanner, the program split the hypothetical 10 x 10 pixel field that it had to fill in into overlapping zones of different sizes (1 x 1, 1 x 2, 2 x 1, 2 x 2 pixels, and so on). Then, using its templates, it determined the probability that a given group of pixels was white, black, or some combination of the two colors.

A multitude of such estimates allowed the machine to set the color of each pixel individually, and the reconstructed image turned out to be very close to what the person actually saw, though of course it did not match it exactly.
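Here is a sketch of how such overlapping multiscale estimates can be merged into a final picture. The patch sizes match the article's description, but the simple probability-averaging rule (and the `patch_prob` callable standing in for trained local decoders) are our own simplified assumptions:

```python
import numpy as np

GRID = 10
# Patch shapes for the local predictors: 1x1, 1x2, 2x1 and 2x2 pixels.
PATCH_SHAPES = [(1, 1), (1, 2), (2, 1), (2, 2)]

def combine_patch_estimates(patch_prob):
    """Average overlapping patch-level predictions into per-pixel colors.

    `patch_prob(top, left, h, w)` returns an h x w array with the estimated
    probability that each pixel in that patch is white. In a real system it
    would be backed by trained local decoders; any callable works here.
    """
    votes = np.zeros((GRID, GRID))
    counts = np.zeros((GRID, GRID))
    for h, w in PATCH_SHAPES:
        for top in range(GRID - h + 1):
            for left in range(GRID - w + 1):
                votes[top:top + h, left:left + w] += patch_prob(top, left, h, w)
                counts[top:top + h, left:left + w] += 1
    probs = votes / counts                 # every pixel is covered at least once
    return (probs > 0.5).astype(int)       # final black/white decision per pixel

# Demo with a dummy predictor that leans toward "white" in the top half.
demo = combine_patch_estimates(
    lambda top, left, h, w: np.full((h, w), 0.8 if top < GRID // 2 else 0.2)
)
print(demo)
```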

The general principle of decoding "visual thoughts" (details of this work can be found in an article in the journal Neuron).

It will take several more years of experiments to substantially increase the resolution of the recognized images and, at the same time, learn to read information about pixel color.

But attractive pictures loom on the horizon.

We have already mentioned drawing pictures or design sketches by the power of thought alone (alongside this application of the new technology, mentally composing music also suggests itself; experiments in that direction have been under way for some time).

But that is not all. Doctors, for example, would gladly get "access" to the hallucinations of the mentally ill. How much simpler the diagnosis and monitoring of treatment would be if doctors could watch on a computer screen what their patients are seeing!

Dr. Kang Cheng of the RIKEN Brain Science Institute predicts that over the next 10 years further development of this technology will not only make it possible to add color to the pictures but, more broadly, to move toward literal mind reading "with a certain degree of accuracy".


Portal "Eternal youth" http://www.vechnayamolodost.ru/16.12.2008
