“Solid” neural networks


Software-based artificial neural networks function similarly to the real ones in our brains, but they need immense amounts of computing power to do so. Exotic electronic components could make them considerably more efficient.

The brain is insatiable. Although it accounts for only two percent of our body weight, it devours nearly one-fifth of the body's total energy consumption. And it puts only 5 percent of that to use in its principal activity, conscious thinking. Internal communication, which runs constantly in the background, burns up the rest.

Compared to computers, the human brain is surprisingly efficient. It performs roughly ten trillion (10¹³) computing operations every second while consuming approximately 20 watts. That is something supercomputers can only dream of: they currently consume 50 to 5,000 times more energy for comparable work. Researchers around the world are trying to close that gap using artificial neural networks, although most of their attempts so far rely on software.
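The gap can be made concrete with a back-of-envelope calculation based on the figures quoted above (all values are rough estimates):

```python
# Energy per computing operation, brain vs. supercomputer,
# using the approximate figures from the text.

BRAIN_OPS_PER_SEC = 1e13   # ~10^13 operations per second
BRAIN_POWER_W = 20         # ~20 watts

brain_joules_per_op = BRAIN_POWER_W / BRAIN_OPS_PER_SEC

# A supercomputer doing comparable work at 50x to 5,000x the energy cost:
super_low = brain_joules_per_op * 50
super_high = brain_joules_per_op * 5000

print(f"Brain:         {brain_joules_per_op:.0e} J/op")
print(f"Supercomputer: {super_low:.0e} to {super_high:.0e} J/op")
```

At these numbers the brain spends on the order of two picojoules per operation, while a supercomputer needs between a tenth of a nanojoule and ten nanojoules.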

Neural networks with memristors

Now French researchers have cast this approach in hardware. They have developed artificial synapses which, like those in the brain, can change the "strength" of the connections between nerve cells (neurons) depending on how often they are used. That makes it possible to weaken or strengthen connections and respond appropriately to changes in the environment.
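This use-dependent behavior can be sketched in a few lines of code. The following toy model is purely illustrative and is not the CNRS/Thales model: a connection strengthens each time it is used and slowly decays when idle.

```python
# Toy sketch of use-dependent plasticity (illustrative only):
# frequent use strengthens a connection, idleness weakens it.

class Synapse:
    def __init__(self, weight=0.5, gain=0.1, decay=0.02):
        self.weight = weight  # connection "strength", kept in [0, 1]
        self.gain = gain      # strengthening per use
        self.decay = decay    # weakening per idle step

    def step(self, used: bool) -> float:
        if used:
            self.weight = min(1.0, self.weight + self.gain)
        else:
            self.weight = max(0.0, self.weight - self.decay)
        return self.weight

s = Synapse()
for _ in range(3):
    s.step(used=True)      # three uses in a row strengthen the link
print(round(s.weight, 2))  # 0.8
```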

The artificial synapses developed by the French research organization CNRS and the Thales Group are so-called memristors, a coinage that combines “memory” and “resistor”. The electrical resistance of the nanometer-thin ferroelectric layer between two electrodes is determined by the direction and quantity of the charge that has already flowed through it. A lower electrical resistance means a stronger “neural” connection, and vice versa.
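A minimal model makes the principle tangible. The sketch below is an assumption-laden simplification, not the actual device physics: conductance is a bounded function of the net charge that has passed through the device, so past current changes future resistance.

```python
# Minimal memristor-style model (a sketch, not real device physics):
# the device "remembers" the signed charge that has flowed through it.

class Memristor:
    def __init__(self, g_min=1e-6, g_max=1e-3):
        self.g_min, self.g_max = g_min, g_max  # conductance bounds (siemens)
        self.charge = 0.0                      # net charge passed (coulombs)

    def conductance(self) -> float:
        # Map accumulated charge onto the allowed conductance range.
        frac = max(0.0, min(1.0, 0.5 + self.charge))
        return self.g_min + frac * (self.g_max - self.g_min)

    def apply(self, current: float, dt: float) -> float:
        # Positive current raises conductance (a "stronger" connection),
        # negative current lowers it.
        self.charge += current * dt
        return self.conductance()

m = Memristor()
g_before = m.conductance()
g_after = m.apply(current=0.2, dt=1.0)  # positive charge has flowed
# g_after > g_before: the connection has strengthened
```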

The approach itself is nothing new. However, the researchers have now managed to develop a physical model that allows predictions about how the artificial synapses learn. That is an important prerequisite for an energy-efficient implementation in future computers.

Electrochemical synapses

Just recently, researchers from Stanford University and Sandia National Laboratories also introduced an artificial version of synapses. Their electrochemical module, made from semiconducting plastics, resembles a typical battery design, with two outer layers as “poles” and an electrolyte in between. Low positive switching voltages drive a current of positively charged ions. At the same time, up to five hundred distinct charge states form in the plastic layers, each causing a non-volatile change in conductivity. Negative voltage pulses reverse the process.
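The essential idea, a fixed number of persistent conductance states stepped up by positive pulses and down by negative ones, can be sketched as follows. The class and its parameters are illustrative assumptions, not the published device model; only the figure of roughly 500 states comes from the text above.

```python
# Sketch of a device with a fixed number of non-volatile conductance
# states (~500 in the Stanford/Sandia module; details here are invented).

class ElectrochemicalSynapse:
    def __init__(self, n_states=500):
        self.n_states = n_states
        self.state = n_states // 2  # start in the middle of the range

    def pulse(self, polarity: int) -> int:
        # polarity = +1: positive voltage pulse raises conductivity;
        # polarity = -1: negative pulse lowers it. The new state
        # persists without power (non-volatile).
        self.state = max(0, min(self.n_states - 1, self.state + polarity))
        return self.state

syn = ElectrochemicalSynapse()
syn.pulse(+1)
syn.pulse(+1)
syn.pulse(-1)
print(syn.state)  # 251
```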

This electrochemical process closely resembles the natural way signals are transmitted by synapses in our brains, which likewise bridge the gap between two nerve cells and rely on a current of ions to pass on electrical signals.

Unlike the French team, the researchers at Stanford believe that their “battery approach” could one day even yield neural modules serving as interfaces between computers and the brain, or as building blocks for neuromorphic computers. But that future is likely a distant one.


Knowledge base

Article: “Learning through ferroelectric domain dynamics in solid-state synapses”.






Neural networks (Photo: Thales Group).

Ferroelectric nano-synapses learn to recognize patterns in a predictable way. (Photo: Thales Group).