Many companies have created hardware for AI workloads. By simulating the human brain, neuromorphic chips can now speed up machine learning while reducing power requirements.
An increasing need for collection, analysis and decision-making from highly dynamic and unstructured natural data is driving demand for compute that may outpace both classic CPU and GPU architectures. To keep pace with the evolution of technology and to drive computing beyond PCs and servers, Intel has been working for the past six years on specialized architectures that can accelerate classic compute platforms.
As part of this effort, Intel has developed a first-of-its-kind self-learning neuromorphic chip – codenamed Loihi – that mimics how the brain functions, learning to operate based on various modes of feedback from the environment. This extremely energy-efficient chip, which uses data to learn and make inferences, gets smarter over time and does not need to be trained in the traditional way.
Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well.
The potential benefits of self-learning chips are limitless. One example is feeding a person’s heartbeat readings taken under various conditions – after jogging, following a meal or before going to bed – to a neuromorphic chip that parses the data to determine what a “normal” heartbeat looks like. The system can then continuously monitor incoming heart data and flag patterns that do not match the “normal” pattern. The same logic could be applied to other use cases, such as cybersecurity, where an abnormality or difference in data streams could identify a breach or a hack, since the system has learned what is “normal” under various contexts.
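The heartbeat idea can be illustrated with a simple sketch: learn a per-context “normal” from a handful of readings, then flag new readings that deviate strongly. The context names, sample values and threshold below are purely illustrative, and the statistical rule stands in for what a neuromorphic chip would learn with spiking neurons.

```python
from statistics import mean, stdev

# Illustrative readings (beats per minute) for each context.
readings = {
    "after jogging": [140, 150, 145, 155, 148],
    "after a meal":  [80, 85, 82, 88, 84],
    "before bed":    [60, 62, 58, 64, 61],
}

# "Training": summarize each context as mean and standard deviation.
normal = {ctx: (mean(vals), stdev(vals)) for ctx, vals in readings.items()}

def is_abnormal(context, bpm, k=3.0):
    """Flag a reading more than k standard deviations from the learned normal."""
    mu, sigma = normal[context]
    return abs(bpm - mu) > k * sigma

print(is_abnormal("before bed", 61))   # typical resting reading -> False
print(is_abnormal("before bed", 110))  # elevated resting reading -> True
```

The same structure carries over to the cybersecurity case: replace heart-rate readings with per-context traffic statistics and flag deviations from the learned baseline.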
Loihi – the neuromorphic chip
The Loihi research test chip includes digital circuits that mimic the brain’s basic mechanics, making machine learning faster and more efficient while requiring less compute power. It offers highly flexible on-chip learning and combines training and inference on a single chip. This allows machines to be autonomous and to adapt in real time instead of waiting for the next update from the cloud. Researchers have demonstrated learning at a rate that is a 1-million-fold improvement over other typical spiking neural nets, as measured by the total operations needed to achieve a given accuracy on MNIST (a large database of handwritten digits) digit recognition. Compared with technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses far fewer resources on the same task. Further, it is up to 1,000 times more energy-efficient than the general-purpose computing required for typical training systems.
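To make the spiking-neural-net idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the basic unit behind spiking networks like those Loihi implements in silicon. The leak and threshold values are illustrative and much simpler than Loihi’s actual neuron model.

```python
def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Integrate input over time; emit a spike (1) when the membrane
    potential crosses the threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for i in input_current:
        potential = potential * leak + i   # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Steady weak input: the neuron fires only after enough charge accumulates.
print(simulate_lif([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because information is carried by sparse spike events rather than dense activations, such networks can skip work when neurons are silent, which is one intuition behind the efficiency figures above.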
The Loihi test chip’s features include:
- A fully asynchronous neuromorphic many-core mesh that supports a wide range of sparse, hierarchical and recurrent neural network topologies, with each neuron capable of communicating with thousands of other neurons.
- Each neuromorphic core includes a learning engine that can be programmed to adapt network parameters during operation, supporting supervised, unsupervised, reinforcement and other learning paradigms.
- Fabrication on Intel’s 14 nm process technology.
- A total of 130,000 neurons and 130 million synapses.
- Development and testing of several algorithms with high algorithmic efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.
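The programmable learning engine in each core adapts synaptic weights while the network runs, rather than in a separate offline training phase. A toy sketch of one such plasticity rule is a Hebbian update: strengthen a synapse when its two neurons spike together, and let it decay otherwise. The rule and rates below are illustrative; Loihi’s engine supports a range of programmable rules across the paradigms listed above.

```python
def hebbian_update(weight, pre_spike, post_spike, lr=0.1, decay=0.01):
    """Strengthen a synapse when pre- and post-neurons spike together;
    otherwise let the weight decay slowly toward zero."""
    if pre_spike and post_spike:
        weight += lr              # correlated activity: potentiate
    weight -= decay * weight      # passive decay keeps weights bounded
    return weight

w = 0.5
# Repeated correlated spiking drives the weight up, step by step,
# the way an online rule would during operation.
for _ in range(5):
    w = hebbian_update(w, pre_spike=1, post_spike=1)
print(round(w, 3))  # -> 0.961
```

Applying updates event by event like this is what lets training and inference share a single chip, as described above.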
In the first half of 2018, the Loihi test chip will be shared with leading university and research institutions with a focus on advancing AI.