ahmad.haj.mosa@pwc.com
LinkedIn: ahmad-haj-mosa
schneider.fabian@pwc.com
LinkedIn: fabian-schneider-24122379
| Interpretation | Explanation |
|---|---|
| Between AI and its designer | Between AI and its consumer |
| External analytics | Learned by the model |
| Model-specific or model-agnostic | Model-specific |
| Local or global | Local |
| Examples: LIME, t-SNE | Examples: attention mechanisms |
source: USPAS
source: “Why Should I Trust You?” Explaining the Predictions of Any Classifier
source: DARPA
[Chart (DARPA XAI): learning techniques plotted by prediction accuracy vs. explainability. Neural nets / deep learning: highest accuracy, lowest explainability. Ensemble methods (random forests), statistical models (SVMs), graphical models (Bayesian belief nets, SRL, MLNs, Markov models), and AOGs sit in between. Decision trees: highest explainability.]
[Brain diagram: top-down attention and bottom-up attention, with the prefrontal cortex and parietal lobe labeled]
Attention allows us to focus on a few elements out of a large set.
Attention focuses on a few appropriate abstract or concrete elements of a mental representation.
source: Yoshua Bengio
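The attention mechanism listed earlier as an in-model explanation can be sketched in miniature. The following is a generic scaled dot-product attention toy in plain Python (an assumption for illustration, not code from the talk): a query picks out a few relevant elements of a set via softmax weights.

```python
import math

def softmax(scores):
    """Turn raw similarity scores into positive weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Weight the values by how well their keys match the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    out = sum(w * v for w, v in zip(weights, values))
    return out, weights

keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # hypothetical element descriptors
values = [10.0, 20.0, 30.0]
query = [4.0, 0.0]                            # most similar to the first key

out, weights = attend(query, keys, values)
print(max(range(3), key=lambda i: weights[i]))  # 0: focus lands on the first element
```

The softmax weights are themselves a crude explanation: they show which elements the model attended to when producing the output.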
Marvin Minsky: Knowledge Representation
Daniel Kahneman: Thinking, Fast and Slow
Yoshua Bengio: Disentangled Representation
Donald O. Hebb: Hebbian Learning
| System 1 | System 2 |
|---|---|
| drives a car on a highway | drives a car in a city |
| comes up with a good chess move (if you are a chess master) | points your attention towards the clowns at the circus |
| understands simple sentences | understands law clauses |
| correlation | causation |
| hard to explain | easy to explain |
source: Thinking fast and slow by Daniel Kahneman
source: The Emotion Machine, Marvin Minsky
[Diagram (consciousness prior): System 1 as the representation model: thinking fast, learning slow, hard to explain. System 2 as the decision model: thinking slow, learning fast, easy to explain.]
source: arXiv:1810.02338: Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding
[Pipeline diagram: the image passes through mask segmentation and de-rendering into a structured scene representation; the question passes through question parsing into program generation; program execution over the scene yields the answer]
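The NS-VQA idea above can be sketched in a few lines (an assumed toy, not the paper's code): once vision is de-rendered into a structured scene and the question is parsed into a program, answering reduces to deterministic symbolic execution over the scene.

```python
# De-rendered scene: a list of object records (hypothetical CLEVR-style attributes).
scene = [
    {"shape": "cube", "color": "red", "size": "large"},
    {"shape": "sphere", "color": "blue", "size": "small"},
    {"shape": "cube", "color": "blue", "size": "small"},
]

def filter_attr(objs, attr, value):
    """Program primitive: keep objects whose attribute matches the value."""
    return [o for o in objs if o[attr] == value]

def execute(program, scene):
    """Run a generated program, a list of (op, args) steps, over the scene."""
    state = scene
    for op, args in program:
        if op == "filter":
            state = filter_attr(state, *args)
        elif op == "count":
            state = len(state)
    return state

# "How many blue cubes are there?" parsed into a program:
program = [("filter", ("color", "blue")), ("filter", ("shape", "cube")), ("count", ())]
print(execute(program, scene))  # 1
```

Every intermediate `state` is inspectable, which is exactly what makes this decomposition interpretable compared to an end-to-end decision model.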
"Any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated', so that activity in one facilitates activity in the other."
[Diagram: neurons A and B repeatedly active at the same time]
"When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell."
[Diagram: neuron A repeatedly assisting in firing neuron B over time]
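Hebb's rule can be written down directly. A minimal sketch, assuming the classic rate-based update dw = eta * x * y ("cells that fire together wire together"); the spike trains and learning rate are made up for illustration:

```python
eta = 0.1    # learning rate (hypothetical)
w = 0.0      # synaptic weight from neuron A to neuron B

# Activity over time: 1 = firing, 0 = silent.
a = [1, 0, 1, 1, 0, 1]   # neuron A
b = [1, 0, 1, 0, 0, 1]   # neuron B

for x, y in zip(a, b):
    w += eta * x * y     # the weight grows only when A and B fire together

print(round(w, 2))  # 0.3: three coincident firings
```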
[Diagram, repeated over five animation frames: Input 1 flows through a graph of Programs 1-9 to Output 1 and Output 2; successive frames rewire the routing between the programs]
- modular
- interpretable
- parallelizable
- fully deterministic (after graph generation)
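Those four properties can be demonstrated with a tiny program graph (an assumed sketch, not the talk's implementation): each node is a small, independently testable program, the wiring is explicit data, and once the graph is generated, execution is fully deterministic.

```python
def f_not(a):            # hypothetical node programs
    return not a

def f_or(a, b):
    return a or b

def f_and_not(a, b):
    return a and not b

# Graph: node name -> (program, names of its inputs); "in" is the graph input.
graph = {
    "p1": (f_not, ["in"]),
    "p2": (f_or, ["in", "p1"]),
    "out": (f_and_not, ["p2", "p1"]),
}

def run(graph, value):
    """Execute the graph; dict insertion order doubles as topological order."""
    env = {"in": value}
    for name, (prog, deps) in graph.items():
        env[name] = prog(*(env[d] for d in deps))
    return env["out"]

print(run(graph, True), run(graph, False))  # True False
```

Because every node's output is stored in `env`, any intermediate result can be inspected, which is where the interpretability claim comes from.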
[ not(A || C) || (C && not(B && E)), f7(f1(A)), not(f8(f7(f1(A)))), not(A || B) || implication(C, D), ... ]
--List of Parameters for Node Function f(x, y):
[ 0, 1, 2, 3 ]
--Parameter Permutation:
[ (0,1), (1,2), (2,3), (3,2), (2,1), (1,0) ]
--Parameter Permutation with commutative function f(x, y):
[ (0,1), (1,2), (2,3) ]
(using ingenuity instead of computational power)
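The pruning step above can be sketched directly (an assumed illustration, not the talk's code): when a node function f(x, y) is commutative, (x, y) and (y, x) compute the same value, so half of the candidate argument pairs can be dropped before any evaluation.

```python
pairs = [(0, 1), (1, 2), (2, 3), (3, 2), (2, 1), (1, 0)]

def prune_commutative(pairs):
    """Keep one representative per unordered pair, preserving order of appearance."""
    seen, kept = set(), []
    for x, y in pairs:
        key = frozenset((x, y))   # (x, y) and (y, x) map to the same key
        if key not in seen:
            seen.add(key)
            kept.append((x, y))
    return kept

print(prune_commutative(pairs))  # [(0, 1), (1, 2), (2, 3)]
```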
(.-) :: [[a] -> [b]] -> ([[b]] -> [a] -> [c]) -> [a] -> [c]
(.-) funcList newFunc inp = newFunc (map ($ inp) funcList) inp
-- where
--   funcList = previous node function list
--   newFunc  = node function
--   inp      = sample input
Curry-Howard-Lambek Isomorphism:
"A proof is a program, and the formula it proves is the type for the program."
- Accuracy is not enough to trust an AI
- Accuracy vs Explainability is not a trade-off
Look for solutions in weird places:
- Try Functional Programming & Category Theory
Trends in XAI:
- closing the gap between Symbolic AI and DL
- training DL representation models, not decision models
- using Object-Oriented Learning
- using Computational Graph Networks
+ Don't let your robot read legal texts ;)
Interested in more AI projects?
Fabian Schneider
PoC-Engineer & Researcher
schneider.fabian@pwc.com
Ahmad Haj Mosa
AI Researcher
ahmad.haj.mosa@pwc.com