TREND Explainable AI: Behind the scenes


Researchers at the German Research Center for Artificial Intelligence recently received the NVIDIA Pioneer Award for their project "What do Deep Networks Like to See?", an innovative analytical technique that provides detailed insights into the properties and processing methods of deep neural networks. With their work, the scientists are very much in tune with current trends. For example, analysts from PricewaterhouseCoopers (PwC) predict in their Top 10 AI trends for 2018 that regulatory agencies and end users will increasingly demand a look into the black box of AI. Yet the problem of explainability (explainable AI) is just as old as artificial intelligence itself.

People have always hoped for a better understanding of smart machines. But lately a raft of new reasons has come along. Calls for the complete elimination of black-box algorithms in safety-critical applications such as autonomous driving and in public facilities for justice, healthcare or education are growing louder. And there are also regulatory requirements. For example, a “right to explanation” could become established with the new EU General Data Protection Regulation (GDPR). Ultimately, acceptance of new technologies by both society and business depends on their transparency. So there are plenty of reasons for putting explainable AI at the top of the agenda.

AI on drugs

With Deep Dream, Google presented in 2015 what may be the best-known and most striking way to look inside an AI. Researchers trained a neural network on sample images as usual, but instead of using the network to classify new images, they let the algorithm get "artistic."

After publication of the source code, a variety of Deep Dream generators followed. (Image: Wikipedia / MartinThoma, CC0)

Grossly simplified: during training with a huge number of dog pictures, a convolutional neural network (CNN) adjusts its parameters until it yields the right result for this input data: "dog." In the second step, an arbitrary image is fed in and the network is allowed to modify that image until "dog" comes out as the result.

Amplifying activations in the network's early layers yields simple patterns, since those layers respond to simple structures such as edges and textures. In the 2015 experiment, amplifying the deeper layers, which respond to more complex structures, produced imagery that reminded people of LSD trips, and Deep Dream quickly became a viral hit.
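This "let the network redraw the input" step boils down to gradient ascent on the image. The snippet below is only a minimal sketch, assuming PyTorch and a pretrained torchvision VGG16 as a stand-in network; the layer index, step count, and step size are illustrative choices, not the settings of the original Deep Dream release.

```python
# Minimal Deep Dream-style sketch (assumes a recent PyTorch/torchvision install;
# VGG16 stands in for the trained CNN described above).
import torch
import torchvision.models as models

cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def deep_dream(image, layer_index=20, steps=50, lr=0.05):
    """Nudge the input image so that the chosen layer's activations grow stronger."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        activations = image
        for i, layer in enumerate(cnn):      # forward pass up to the chosen layer
            activations = layer(activations)
            if i == layer_index:
                break
        loss = activations.norm()            # "how strongly does this layer fire?"
        loss.backward()
        with torch.no_grad():
            # Gradient ascent: change the image, not the network weights.
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# Start from random noise (or any RGB image tensor of shape (1, 3, 224, 224)).
dream = deep_dream(torch.rand(1, 3, 224, 224))
```

A small layer_index amplifies edge- and texture-like patterns, while a larger one brings out the complex, dream-like structures described above.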

Explainable AI in reverse

Another “backward” method for understanding neural networks is layer-wise relevance propagation (LRP), developed by researchers at Fraunhofer HHI and TU Berlin in 2015.

(Image: Fraunhofer HHI)

The method works through the individual steps of a process such as image recognition in reverse gear, so to speak. It considers which neurons were involved in which decisions and how much each contributed to the final result. The result is a so-called heat map (see demos) that shows which pixels had an especially strong influence on the image's classification. In contrast to many other methods, this one is universally applicable and not limited to image recognition.
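At its core, LRP is a backward pass that redistributes the network's output score, layer by layer, onto the neurons and finally the input pixels that contributed to it. The following is a minimal sketch of the commonly used epsilon rule on a toy two-layer ReLU network in NumPy; the network shape, random weights, and epsilon value are illustrative assumptions, not the Fraunhofer HHI/TU Berlin implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 input pixels -> 3 hidden ReLU units -> 2 output classes.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

x = rng.normal(size=4)              # one flattened input "image"
a1 = np.maximum(0.0, x @ W1 + b1)   # forward pass: hidden activations
out = a1 @ W2 + b2                  # forward pass: class scores

def lrp_epsilon(a, W, relevance, eps=1e-6):
    """Redistribute relevance from a layer's outputs back to its inputs,
    in proportion to each input's contribution z_ij = a_i * w_ij."""
    z = a[:, None] * W                                      # (inputs, outputs)
    denom = z.sum(axis=0)
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)   # epsilon stabiliser
    return (z / denom * relevance).sum(axis=1)

# Put all relevance on the winning class, then walk the network backwards.
R_out = np.zeros_like(out)
R_out[out.argmax()] = out[out.argmax()]
R_hidden = lrp_epsilon(a1, W2, R_out)
R_input = lrp_epsilon(x, W1, R_hidden)   # per-pixel relevance = the "heat map"
print(R_input)
```

Reshaped to the image dimensions, R_input plays the role of the heat map: large values mark the pixels that pushed the classification most strongly.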

An example illustrates how important such methods are for the use of data-driven learning algorithms, especially in applications where lives are at stake. The researchers found that of two "intelligences" that could identify horses, only one had really learned to recognize them by their typical body shape. The other program focused mainly on copyright watermarks in the photos, which pointed back to forums for horse lovers.

The latter example is surely unlikely to allay mistrust of systems that use artificial intelligence. That is why it is important to develop methods that provide not only transparency and explainability but also provability. Both regulators and end users will increasingly call for this look into the black box.

Learn about the use of AI in safety-critical applications such as autonomous driving at the 2018 electronica Automotive Conference.


The look inside is just as complicated as the inside itself. (Image: pixabay/CC0).