TREND Edge Computing: Intelligence at the edges


Cloud-based AI offers an easy route to deploying artificial intelligence technology. But when milliseconds matter, decisions need to be made on the spot, without the latency of a round trip to a data center – at the edge.

Huge amounts of data and computing power have rapidly catapulted what is commonly called artificial intelligence from the “academic” fringe onto the covers of mainstream media. And no wonder, since almost all current megatrends – such as autonomous driving, Industry 4.0, IoT and anything beginning with “smart” – are dependent on the AI drip.

In particular, deep learning – probably the most exciting learning method in the world of machine learning – has given rise to a whole series of breakthroughs in image and speech recognition. The massive amounts of data involved and the necessary computing power mean that a large proportion of this “machine intelligence” comes into being in remote data centers. The assistants created by Google, Apple and co. are also not really located in smartphones or smart speakers, but on servers in some cloud somewhere in the world.

At the same time, Apple’s new iPhones indicate where today’s AI journey is headed: out of the cloud and into the devices. The new A11 Bionic chip in Apple’s iPhone 8, 8 Plus and X is a system on a chip (SoC) that has a special “neural engine” for machine learning alongside the usual CPU and GPU (graphics) cores. This means that the iPhone’s face-recognition feature no longer needs to bother the cloud.

The reasons for this trend – often also called cognitive edge computing or edge analytics – are obvious. If Gartner’s estimate is correct, there will be 20 billion IoT endpoints by 2020. If they all dump their data into the cloud and wait for decisions, this will not only be extremely costly and introduce latency, but will also strain the network infrastructure. It follows that some things will have to happen “on the spot”. Of course the cloud will remain significant as a gigantic “learning and training center”, but sensing, inferring and acting will be done at the edges – a division of labor the sketch below illustrates.
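To make that division of labor concrete, here is a minimal, vendor-neutral Python sketch (using only NumPy, with a hypothetical tiny logistic-regression model as a stand-in): the “cloud” side trains and hands over nothing but the learned weights, while the “edge” side runs only the cheap forward pass locally, so no network round trip is needed per sensor reading.

import numpy as np

# --- "cloud" side: training on bulk historical data (illustrative model) ---
def train_in_cloud(X, y, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid forward pass
        grad_w = X.T @ (p - y) / len(y)          # gradient of the log loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b                                  # only these weights get deployed

# --- "edge" side: inference only, no cloud call per sensor reading ---
def infer_at_edge(sensor_vector, w, b, threshold=0.5):
    score = 1.0 / (1.0 + np.exp(-(sensor_vector @ w + b)))
    return score >= threshold                    # decide and act locally

# Toy usage: train on synthetic data, then classify a new reading on-device.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = train_in_cloud(X, y)
print(infer_at_edge(np.array([0.8, 0.5, -0.1]), w, b))

In practice the edge-side model would of course be something larger that was trained and compressed in the cloud, but the pattern is the same: heavy learning stays in the data center, the lightweight decision runs on the device.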

Data with an expiration date

IoT Drives Real-Time Data. (Image: IDC's Data Age 2025 study, sponsored by Seagate)

The example of an Airbus A350 illustrates a further reason. The almost 6,000 sensors in the aircraft generate a daily data avalanche of 2.5 terabytes. Much of that data either loses its value almost immediately or calls for a real-time response. Would you want a cloud round trip in that loop? The same goes for autonomous cars, for an emergency machine shutdown in response to a fault, or for defending against a cyber attack or a financial fraud that takes only milliseconds to carry out.

Machine-learning models, such as anomaly detection using Kalman filters or Bayesian forecasting models, can detect and avert approaching disasters in good time at the point where the data is collected. In autonomous cars, advanced driver-assistance systems based on image processing, anomaly detection or reinforcement learning will ensure that the car reacts to its surroundings appropriately, safely and quickly enough.
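As a rough illustration of the first idea, the following Python sketch implements a one-dimensional Kalman filter that tracks a sensor stream on the device and flags readings whose innovation falls far outside the predicted uncertainty. The noise parameters and the gating threshold are illustrative assumptions, not values from any real deployment.

import numpy as np

def kalman_anomaly_stream(readings, process_var=1e-3, meas_var=0.05, gate=4.0):
    """Yield (estimate, is_anomaly) for each reading in a sensor stream."""
    x, P = readings[0], 1.0               # initial state estimate and variance
    for z in readings:
        P = P + process_var                # predict: random-walk model
        residual = z - x                   # innovation
        S = P + meas_var                   # innovation variance
        is_anomaly = residual**2 > gate**2 * S   # chi-square-style gate
        if not is_anomaly:                 # update only with plausible readings
            K = P / S                      # Kalman gain
            x = x + K * residual
            P = (1.0 - K) * P
        yield x, is_anomaly

# Toy usage: a slowly drifting temperature signal with one injected fault spike.
signal = np.concatenate([np.linspace(20.0, 21.0, 50), [35.0],
                         np.linspace(21.0, 21.2, 10)])
flags = [a for _, a in kalman_anomaly_stream(signal)]
print("anomalies at indices:", [i for i, a in enumerate(flags) if a])

The point of running this on the endpoint itself is that the flag is raised within one sampling interval, instead of after the reading has traveled to a data center and back.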

Mini supercomputers for edge computing

But machine learning at the edge has its own limitations as regards processor power, energy consumption, cost and memory, even though companies such as Nvidia can now pack into a “shoebox” the processing power that would quite recently have been classed as a supercomputer. The recently announced Xavier SoC (system on a chip), with nine billion transistors on 350 mm², processes data from lidar (light detection and ranging), radar, ultrasound and cameras for level 5 autonomy. Nvidia has announced 30 trillion operations per second (TOPS) at a power consumption of 30 watts, with mass production due by the end of 2018. Although most traditional car manufacturers and automotive players from other sectors are lagging behind Nvidia, such low-power mini-supercomputers perform their tasks just as well in intelligent edge devices such as robots, drones, smart cameras or portable medical equipment.

But Xavier also shows that there is no single optimal AI processor for these applications. Nvidia’s next “autonomous” flagship combines CPUs, GPUs and dedicated deep-learning accelerators, but there are other solutions built around DSPs or FPGAs. The trend, however, is toward a mixture of accelerators, CPUs, GPUs and DSPs (digital signal processors); as far as form factor and power consumption are concerned, this combination currently seems to be in the lead.

This is also because these accelerators come with ever larger arrays of multiply-accumulate units (MACs). These are optimized for the arithmetic operation in which two factors are multiplied and the product is added to a running sum (the accumulator). Most digital signal processing and neural-network computation boils down to exactly this operation, which conventional processors do not handle particularly efficiently. The sketch below makes the connection explicit.
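The following Python sketch (a deliberately naive loop, not how a real accelerator is programmed) spells out how a small dense neural-network layer decomposes into exactly such multiply-accumulate steps, and counts how many of them are needed.

import numpy as np

def dense_layer_with_macs(inputs, weights, biases):
    """Compute outputs[j] = biases[j] + sum_i inputs[i] * weights[i, j]."""
    n_in, n_out = weights.shape
    outputs = np.array(biases, dtype=float)
    mac_count = 0
    for j in range(n_out):
        acc = outputs[j]                         # accumulator starts at the bias
        for i in range(n_in):
            acc += inputs[i] * weights[i, j]     # one multiply-accumulate (MAC)
            mac_count += 1
        outputs[j] = acc
    return outputs, mac_count

# Toy usage: a 4-input, 3-neuron layer needs 4 * 3 = 12 MACs.
rng = np.random.default_rng(1)
x = rng.normal(size=4)
W = rng.normal(size=(4, 3))
b = np.zeros(3)
out, macs = dense_layer_with_macs(x, W, b)
print(macs, np.allclose(out, x @ W + b))         # 12, True

A hardware MAC array executes thousands of these inner-loop steps in parallel per clock cycle, which is why dedicated accelerators reach a given throughput at a fraction of the power a general-purpose CPU would need.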

Outlook

Edge computing with machine learning will play a key role in IoT architectures; the sheer number of data-collecting devices in the IoT hardly admits any other solution. Even the cloud giants Amazon, Microsoft and Google have recognized this and now offer machine learning as a service not only on their cloud platforms but also in their own “edge variants.”

Knowledge Base

Breaking New Frontiers in Robotics and Edge Computing with AI
https://www.youtube.com/watch?v=QisCRGmidJ4

Brandon Rohrer, Data Scientist/Facebook: How Deep Neural Networks Work
A gentle introduction to the principles behind neural networks, including backpropagation. https://www.youtube.com/watch?v=ILsA4nyG7I0

The End of Cloud Computing – Peter Levine
https://www.youtube.com/watch?v=l9tOd6fHR-U



Learn more about networked embedded systems and artificial intelligence at the electronica cyber physical systems conference (CPS).


“Eyeriss” could enable mobile devices to run artificial-intelligence algorithms locally. (Image: MIT)