RATE meeting

July 2nd

Previous meeting

Formalization

A Machine Learning explanation \(E\) is an answer to the question:

Why are (instances of) \(\mathcal{X}_E\) classified as \(y\)?

where \(\mathcal{X}_E \subseteq \mathcal{X}\) and \(y \in \mathcal{Y}\).

Philosophical models

(Deductive-Nomological, Inductive-Statistical, Statistical-Relevance models)

Conclusion
Put on the stack, work towards a practical application.

Question: how to formally define an explanation?

Overview

Question: how to visualize the trade-off between explanation desiderata?

  • Triangle
  • Metrics
  • Two alternatives


Triangle

Metrics

Generic: \(\frac{|\{y \in y_{expl} \mid y = NULL\}|}{|y_{expl}|}\)

with \(y_{expl}\) the classes predicted by the explanation (NULL where the explanation does not apply).
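A minimal sketch of the genericity metric, assuming NULL is represented as Python `None` and `y_expl` is the list of classes predicted by the explanation:

```python
def genericity(y_expl):
    """Fraction of instances the explanation leaves unexplained (y = NULL)."""
    return sum(1 for y in y_expl if y is None) / len(y_expl)

print(genericity(["a", None, "b", None]))  # → 0.5
```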

Simple: \(1 - \frac{N_{expl}}{N_{ref}} \)

with \(N\) the number of non-leaf nodes in the decision tree: \(N_{expl}\) for the explanation and \(N_{ref}\) for the reference model.
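A sketch of the simplicity metric. The nested-dict tree representation (`{'left': ..., 'right': ...}` for internal nodes, a plain label for leaves) is an assumption for illustration only:

```python
def count_internal(node):
    """Count non-leaf nodes: internal nodes are dicts, leaves are labels."""
    if not isinstance(node, dict):
        return 0
    return 1 + count_internal(node["left"]) + count_internal(node["right"])

def simplicity(expl_tree, ref_tree):
    """1 - N_expl / N_ref, with N the number of non-leaf nodes."""
    return 1 - count_internal(expl_tree) / count_internal(ref_tree)

# Reference tree with 3 internal nodes, explanation tree with 1:
ref = {"left": {"left": "a", "right": "b"}, "right": {"left": "c", "right": "d"}}
expl = {"left": "a", "right": "b"}
print(simplicity(expl, ref))  # → 2/3 ≈ 0.667
```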

Accurate: \(\frac{\sum TP\ +\ \sum TN}{|\{y \in y_{expl} \mid y \neq NULL\}|}\)

but normalized per class, and computed over explained instances only (those with \(y \neq NULL\)).
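A sketch of the accuracy metric under the per-class normalization described above: unexplained instances (`None`) are dropped, accuracy is computed per true class, and the per-class scores are averaged. This reading of "normalized per class" is an assumption; the source formula admits other interpretations:

```python
def explained_accuracy(y_true, y_expl):
    """Class-normalized accuracy over explained instances only."""
    # Keep only instances the explanation actually classifies.
    pairs = [(t, p) for t, p in zip(y_true, y_expl) if p is not None]
    classes = sorted({t for t, _ in pairs})
    per_class = []
    for c in classes:
        cls = [(t, p) for t, p in pairs if t == c]
        per_class.append(sum(t == p for t, p in cls) / len(cls))
    # Average over classes so frequent classes do not dominate.
    return sum(per_class) / len(per_class)

y_true = ["a", "a", "b", "b"]
y_expl = ["a", None, "b", "a"]  # second instance is unexplained
print(explained_accuracy(y_true, y_expl))  # → 0.75
```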

Problem

Deck July 2nd

By iamdecode
