You don’t always need a hundred layers filled with tens of thousands of neurons to appear “intelligent.” New approaches are achieving some astonishing results with far fewer.
Neural networks have to be trained: you feed them input and adjust the connections between the neurons so that the desired output will “most probably” emerge at the end. Time generally plays no role in this process. The entire input is available at once, and the output is produced immediately as a result.
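This train-by-adjusting-connections loop can be sketched minimally. The example below is a hypothetical toy, not the researchers’ code: a single sigmoid neuron with two inputs, trained by plain gradient descent to compute a logical AND. Note that the whole input arrives at once and time plays no role.

```python
import numpy as np

# Toy network: one neuron, two inputs. Training nudges the connection
# weights so the desired output "most probably" appears.
rng = np.random.default_rng(0)
w = rng.normal(size=2)   # connection strengths
b = 0.0

def forward(x):
    # The entire input is consumed at once; there is no notion of time.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Hypothetical training task: output 1 only when both inputs are on (AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

for _ in range(2000):
    p = forward(X)
    grad = (p - y) * p * (1 - p)   # gradient of squared error w.r.t. pre-activation
    w -= 0.5 * X.T @ grad          # adjust the connections...
    b -= 0.5 * grad.sum()          # ...until the desired output appears

print(np.round(forward(X)))       # close to the target [0, 0, 0, 1]
```

After training, the same fixed weights are applied to every input; nothing in the network remembers previous inputs.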
Speech recognition, by contrast, is time dependent, as is the control of movements that must react to changing environmental conditions. Recurrent neural networks (RNNs) are designed for such tasks: they can represent the passage of time because their nerve cells remember what has happened before.
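The “memory” of an RNN can be shown in a few lines. This is a generic, illustrative recurrent cell (not the model discussed here): a hidden state h is carried from one time step to the next, so the same input can produce different outputs depending on what came before.

```python
import numpy as np

# Minimal recurrent cell: the hidden state h carries information
# from earlier time steps forward.
rng = np.random.default_rng(1)
W_in  = rng.normal(scale=0.5, size=(3, 1))   # input  -> hidden
W_rec = rng.normal(scale=0.5, size=(3, 3))   # hidden -> hidden (the recurrence)

def run(sequence):
    h = np.zeros(3)                  # the network's memory of the past
    outputs = []
    for x in sequence:               # inputs arrive one step at a time
        h = np.tanh(W_in @ np.array([x]) + W_rec @ h)
        outputs.append(h.sum())
    return outputs

# Both sequences end with the same input (1.0), yet the final outputs
# differ, because the network remembers what occurred earlier.
a = run([0.0, 0.0, 1.0])
b = run([1.0, 1.0, 1.0])
print(a[-1] != b[-1])   # True
```

In a feedforward network, by contrast, identical inputs always yield identical outputs.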
In a conventional RNN, a fixed connection weight between two neighboring neurons generally determines how strongly one neuron influences the activity of the other. The model from TU Wien (Vienna University of Technology) takes a different approach: there, both the cells’ activity and the connections between them change as time passes, which opens up a whole world of new possibilities.
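To make the contrast concrete, here is a rough sketch of the idea of continuously evolving, activity-dependent dynamics. It is our own simplified assumption, not TU Wien’s actual model: a single cell whose effective coupling is a function of the current activity, integrated step by step through time.

```python
import numpy as np

# Sketch of a cell whose coupling is NOT a fixed number: the "gate"
# depends on the current activity and input, and the state h evolves
# continuously in time (simple Euler integration of an ODE).
def simulate(inputs, dt=0.1, tau=1.0):
    h = 0.0                                      # cell state
    trace = []
    for x in inputs:
        gate = 1.0 / (1.0 + np.exp(-(x + h)))    # activity-dependent coupling
        dh = (-h / tau) + gate * (1.0 - h)       # leak plus gated drive toward 1
        h += dt * dh                             # one small time step
        trace.append(h)
    return trace

trace = simulate([1.0] * 50)
print(0.0 < trace[-1] < 1.0)   # the state settles between 0 and 1
```

Because the gate changes with the state, the cell’s response to the very same input strengthens or weakens over time, something a fixed weight cannot express.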
The RNN worm
To demonstrate the versatility of this new type of neural network, the researchers “copied” the nervous system that the nematode C. elegans uses to produce a very simple reflex – withdrawing when touched. This network was then simulated and trained on real tasks.
With just 12 nerve cells, this simple network can perform remarkably complex tasks under real conditions – parking a car, for example. Instead of controlling the nematode’s movement, the network’s output “takes over” the vehicle’s steering wheel and gas pedal.
Of course, this does not mean that nematodes will take charge of the steering wheel in the future. But it does show that deep learning is much more capable than previously thought if the right architecture is used.
The method also provides insights into the “essence” of artificial intelligence – a real help for the further development of the concept. In conventional neural networks, which frequently have thousands of nodes, as a rule only the end result can be analyzed. Whatever happens inside the “black box” stays in the black box.