Explainable Artificial Intelligence: A peek inside the black box


Results provided by neural networks can have a massive impact on the people who use them. Therefore, they must be not only correct, but also traceable.

Former German chancellor Helmut Kohl once said, “What matters most is the outcome.” That sounds plausible at first and, setting aside the moral dimension, it holds true in many contexts. In science and technology, however, being able to trace results is just as important as being able to reproduce them; both are essential for further progress in the field. And when those results govern safety-critical applications, transparency is a must.

Neural networks are a typical example of this. They can now be trained to deliver startlingly accurate results. But for one thing, that is not always the case, and for another, it is often unclear exactly how the algorithms arrive at their “wisdom.”

Spectral Relevance Analysis (SpRAy), an extension of LRP technology, identifies and quantifies a wide spectrum of learned decision-making behavior. This also makes it possible to identify undesired decision making, even in very large datasets. (Image: Fraunhofer HHI).

Tests by researchers from the Fraunhofer Heinrich-Hertz Institute HHI and the Technical University of Berlin have shown that AI systems don’t always find useful solutions. For example, one well-known AI system classified pictures based on their context rather than their content: every picture containing a lot of water was labeled a ship. Even if the majority of pictures were classified correctly this way, you wouldn’t want to get into a self-driving car that relies on such a system.

Transparency with explainable artificial intelligence

In sensitive fields of application such as medical diagnosis or safety-critical systems, the problem-solving strategies used by AI systems need to be completely reliable. Until recently, however, it was impossible to trace how they reached their decisions. Now Layer-Wise Relevance Propagation (LRP), an explainable AI method, is shedding light on the subject, and its further development, Spectral Relevance Analysis (SpRAy), allows a broad spectrum of learned behavior to be identified and quantified, even in large datasets.
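
SpRAy builds on the relevance maps (heatmaps) that LRP produces for each input (how LRP does this is described in the next paragraph): clustering those maps groups inputs by the decision strategy the network used, so that unusual strategies stand out. The following is only a minimal sketch of that clustering idea, assuming scikit-learn’s SpectralClustering and arbitrary map sizes and cluster counts; it is not the Fraunhofer HHI implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Illustrative sketch of the SpRAy idea: cluster LRP relevance maps so that
# groups of similar decision strategies, including suspicious ones (such as
# "lots of water means ship"), become visible for inspection.
# Map size, cluster count and the use of scikit-learn are assumptions made
# for this example, not the published setup.

def spray_clusters(relevance_maps: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Group relevance maps of shape (n_samples, height, width) by similarity."""
    features = relevance_maps.reshape(len(relevance_maps), -1)  # flatten each heatmap
    clustering = SpectralClustering(
        n_clusters=n_clusters,
        affinity="nearest_neighbors",
        random_state=0,
    )
    return clustering.fit_predict(features)  # one cluster label per input

# Synthetic heatmaps stand in for real LRP output; clusters whose relevance
# concentrates on irrelevant regions would be candidates for manual review.
heatmaps = np.random.default_rng(1).random((60, 8, 8))
print(spray_clusters(heatmaps))
```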

In practice, LRP measures how much each input variable contributes to the overall prediction and breaks the classifier’s decision down accordingly. Because it traces how relevance flows through the network at every node, it can analyze even extremely deep neural networks.
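
As a rough illustration of how such a relevance breakdown can be computed, here is a minimal sketch of the widely used epsilon rule for a tiny fully connected ReLU network in NumPy. The network, its random weights and the epsilon value are assumptions made for this example only; this is not the Fraunhofer HHI code.

```python
import numpy as np

# Minimal LRP (epsilon rule) sketch for a toy fully connected ReLU network.
# Weights, layer sizes and epsilon are illustrative assumptions.

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2))]  # 4 inputs -> 3 hidden -> 2 classes
biases = [np.zeros(3), np.zeros(2)]

def forward(x):
    """Run the network and keep every layer's activations."""
    activations = [x]
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = activations[-1] @ W + b
        # ReLU on the hidden layer, linear scores at the output
        activations.append(np.maximum(z, 0.0) if i < len(weights) - 1 else z)
    return activations

def lrp_epsilon(activations, target_class, eps=1e-6):
    """Propagate the score of `target_class` back to the input variables."""
    relevance = np.zeros_like(activations[-1])
    relevance[target_class] = activations[-1][target_class]  # all relevance starts at the chosen output
    for layer in reversed(range(len(weights))):
        a, W, b = activations[layer], weights[layer], biases[layer]
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabiliser avoids division by zero
        s = relevance / z                          # share of relevance per pre-activation
        relevance = a * (s @ W.T)                  # redistribute relevance to the layer below
    return relevance                               # one relevance value per input variable

x = np.array([0.2, -1.0, 0.5, 0.8])
acts = forward(x)
print(lrp_epsilon(acts, target_class=int(np.argmax(acts[-1]))))
```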

Explainable AI (Image: Fraunhofer HHI).

When is a train a train? (Image: Fraunhofer HHI).