Explainable AI vs Explaining AI — Part 1

Is deep learning deep enough?

Ahmad Haj Mosa, Dec 21, 2018

Despite the recent remarkable results of deep learning (DL), there is always a risk that it produces delusional and unrealistic results for several reasons, such as under-fitting, over-fitting, or incomplete training data.

For example, the famous Move 78 by the professional Go player Lee Sedol triggered delusional behavior in AlphaGo; adversarial attacks fool image classifiers with tiny perturbations; and DeepXplore found erroneous behavior in the Nvidia DAVE-2 self-driving car platform, where the system made two different decisions for the same input image that differed only in brightness level.
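This DAVE-2 behavior can be framed as a simple metamorphic check: a small brightness shift should not change the decision. Here is a minimal sketch of that idea (not DeepXplore’s actual implementation; predict_steering is a hypothetical stand-in for the real model, and the delta and tolerance values are illustrative):

```python
import numpy as np

# Hypothetical stand-in for a steering model such as DAVE-2.
# In practice this would wrap a trained network's forward pass.
def predict_steering(image: np.ndarray) -> float:
    return float(image.mean() / 255.0 - 0.5)  # placeholder, not a real model

def brightness_consistent(image: np.ndarray, delta: float = 30.0,
                          tolerance: float = 0.05) -> bool:
    """Check that a small brightness shift barely changes the predicted steering."""
    brighter = np.clip(image.astype(np.float32) + delta, 0, 255)
    return abs(predict_steering(image) - predict_steering(brighter)) <= tolerance

# Example: a random camera frame; a failed check flags a DeepXplore-style inconsistency.
frame = np.random.randint(0, 256, size=(66, 200, 3), dtype=np.uint8)
print("consistent under brightness shift:", brightness_consistent(frame))
```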

Such examples push AI researchers to focus more on opening the black box of deep learning rather than relying only on the model’s accuracy.

Some time ago, I was preparing a workshop about the state of AI for non-AI professionals.

Since AI is currently a buzzword and buzzwords usually come with a lot of definitions, I had to select one.

The definition of “artificial” is clear, but what about “intelligence”?

I searched and found the most reasonable and most AI-relevant definition, from Marvin Minsky:

Our model contains processes that enable us to solve problems we consider difficult. “Black-box” is our name for those processes we don’t yet understand.

Actually, this is a modified version of the definition, where I replaced “intelligence” with “black-box” and “mind” with “model.” Although I have changed it, the definition still holds.

These arguments tell us the following: a) black-box and intelligence are the same; b) complex problems require complex solutions; c) a simple, understandable process is most probably not suitable for complex problems such as autonomous vehicles and machine translation; and d) the darker the box is, the more intelligent it gets.

So when we ask deep learning scientists to open the black box, this could imply limiting the model’s capability.

But does it?

It is correct that most of the processes in the human mind are opaque and we don’t understand them, but we can still explain our decisions and thoughts.

Our explanations usually consist of statements and reasons.

Our explanations don’t involve statistical inference analysis (unless it is related to the topic).

So the question is: how do we explain our decisions?

The human mind consists of two different systems the brain uses to form thoughts.

These two systems are:

System 1: fast, intuitive, unconscious, emotional, stereotypical, and automatic; it uses similarity with past experience to reach a decision.

System 2: slow, conscious, logical, and effortful; it uses high-level reasoning to reach decisions.

Figure: Two systems of thinking

System 1 makes automatic decisions that don’t need a lot of concentration or common knowledge, like walking, holding objects, understanding a simple sentence, or driving a car on a highway.

System 2 does high-level reasoning that requires common knowledge, like understanding law clauses or driving a car inside a crowded city, where one needs not only knowledge about driving between the lanes but also knowledge about city traffic rules and human factors.

So the question then is: what do System 1 and System 2 represent in AI?

In 1988, Marvin Minsky published his book The Society of Mind.

One of the most interesting pieces of this book is the framework for representing knowledge in the human brain.

Figure: Framework for representing knowledge

The knowledge in the brain consists of seven layers (ignoring micronemes, the input layer).

Considering the recent AI technologies, I would explain their functionality as follows:

Neural Network: It represents ANN/DL. The main objective of this layer is to avoid the curse of dimensionality and to build a high-level distributed/disentangled representation. This layer represents the most intuitive part (System 1) of the brain. It is stereotypical and automatic. It is slow, hard to learn, and hard to explain.
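As a toy illustration of this layer, the following sketch (assuming PyTorch is available) trains a tiny autoencoder whose 2-dimensional bottleneck is exactly the kind of learned distributed representation described above; the representation is useful, but nothing in it explains itself:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy high-dimensional data: 256 samples with 64 features.
x = torch.randn(256, 64)

# Encoder compresses 64 features into a 2-D distributed representation;
# the decoder reconstructs the input from that bottleneck.
encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))
decoder = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 64))

optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    optimizer.zero_grad()
    code = encoder(x)              # System-1-style intuition: a learned, opaque embedding
    loss = loss_fn(decoder(code), x)
    loss.backward()
    optimizer.step()

# The 2-D codes are usable downstream, but carry no human-readable explanation.
print("reconstruction loss:", loss.item())
print("first sample's learned representation:", encoder(x[:1]).detach().numpy())
```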

K-lines and K-trees: Marvin Minsky looks at K-lines as memory lines, but I propose that the K-lines layer represents what is called inductive programming: a sequence of programs that are learned to solve certain problems. This layer is the first bridge between System 1 and System 2. It is more logical and uses more reasoning than the NN layer. It is easier to learn and to explain.
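To make the inductive-programming idea concrete, here is a minimal sketch that enumerates tiny candidate programs and keeps the first one consistent with a few input/output examples; the induced program is itself the explanation:

```python
# Minimal inductive programming sketch: search a small space of candidate
# programs for one that reproduces the given input/output examples.
CANDIDATES = {
    "x + 1": lambda x: x + 1,
    "x * 2": lambda x: x * 2,
    "x * x": lambda x: x * x,
    "x - 1": lambda x: x - 1,
}

def induce_program(examples):
    """Return the source of the first candidate consistent with all examples."""
    for source, fn in CANDIDATES.items():
        if all(fn(inp) == out for inp, out in examples):
            return source
    return None

# Input/output examples; the learned artifact is readable code, not weights.
examples = [(1, 2), (3, 6), (5, 10)]
print("induced program:", induce_program(examples))  # -> "x * 2"
```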

Semantic Networks: This layer represents knowledge graphs, semantic nets, or ontologies. It is again more in the direction of System 2. One-shot learning here is easier: it amounts to connecting or disconnecting two facts (nodes).
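A toy semantic network makes this concrete: one-shot learning is a single edge edit, and every change is directly inspectable (the facts below are illustrative):

```python
# A tiny semantic network stored as (subject, relation, object) triples.
graph = {
    ("car", "is_a", "vehicle"),
    ("vehicle", "has", "wheels"),
}

def holds(subject, relation, obj):
    """Direct lookup plus one kind of inference: inheritance through 'is_a'."""
    if (subject, relation, obj) in graph:
        return True
    return any(s == subject and r == "is_a" and holds(o, relation, obj)
               for s, r, o in graph)

print(holds("car", "has", "wheels"))        # True, inherited through is_a

# One-shot learning: connect a single new fact ...
graph.add(("car", "has", "autopilot"))
print(holds("car", "has", "autopilot"))     # True immediately, no retraining

# ... or disconnect one.
graph.remove(("vehicle", "has", "wheels"))
print(holds("car", "has", "wheels"))        # False again
```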

Frames: A frame is a data structure with typical knowledge about a particular object or concept. It is the same as layer 3 but with more explainability, and it is easier to learn.
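A frame can be sketched as a record of named slots with typical (default) values that a specific instance may override; the slots below are illustrative choices, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class CarFrame:
    """A frame: typical knowledge about the concept 'car', held in default slot values."""
    wheels: int = 4
    powered_by: str = "engine"
    typical_use: str = "transporting people"
    parts: list = field(default_factory=lambda: ["steering wheel", "brakes", "seats"])

# An instance inherits the typical knowledge and overrides only what differs.
generic_car = CarFrame()
electric_car = CarFrame(powered_by="battery")

print(generic_car.powered_by)   # "engine"  (default slot value)
print(electric_car.powered_by)  # "battery" (overridden slot)
print(electric_car.wheels)      # 4         (inherited default)
```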

Common Knowledge Lines: Here I merged the last three layers into one. The objective of these layers is to connect different domains together to form common sense and common knowledge, e.g., driving a car inside cities. It is the same as layer 4 but with more explainability, and it is easier to learn.

Considering the points above, we reach the following conclusions:

Figure: Framework of Explaining Deep Learning

- What we call deep learning is actually not deep enough to be explainable.
- What we call deep learning is actually not deep enough to perform one-shot learning.
- The key to achieving explainable AI is closing the gaps between the four layers of knowledge.
- By closing these gaps, we achieve an AI that can explain itself using System 2 while it maintains its complex solutions in the intuitive System 1.
- By closing these gaps, we achieve an AI that can intuitively solve problems it has experienced using System 1, and can generalize and solve reasoning problems using System 2.
- The goal of explaining AI is not only to build trust, but also to increase performance and to achieve artificial general intelligence.

In the next parts of this series, I will dive deeply into the state-of-the-art technologies that aim to close the gaps in the above framework.

Starting from explaining System 1 decisions using LIME or SHAP, going through recent neural-symbolic research and inductive logic programming, addressing the consciousness prior by Yoshua Bengio, and finally how to build an explaining AI model instead of explaining an AI model.
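As a small preview of that first step, here is a minimal sketch (assuming the open-source lime and scikit-learn packages) that asks LIME for a local, human-readable explanation of a single prediction made by an otherwise opaque model:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# An opaque "System 1" model: accurate, but not self-explanatory.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# LIME fits a simple local surrogate around one instance and reports
# which features pushed this particular prediction up or down.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # e.g. [("petal width (cm) <= 0.30", 0.4), ...]
```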

Stay tuned!
