Our former student Nina Schaaf published the results of her master's thesis at the 18th International Conference on Machine Learning and Applications (ICMLA 2019) in Boca Raton, Florida. The thesis was jointly supervised by Professor Marco Huber and Professor Johannes Maucher. Here is the paper's abstract:
One obstacle that has so far prevented the introduction of machine learning models, primarily in critical areas, is the lack of explainability. In this work, a practicable approach for gaining explainability of deep artificial neural networks (NN) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests on different data sets confirm that L1-orthogonal regularization yields models of lower complexity and, at the same time, higher fidelity compared to other regularizers.
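To give a rough idea of the technique the abstract describes, the sketch below shows how an L1-orthogonality penalty on the weight matrices could be added to a network's training loss, and how a small decision tree surrogate could then be fitted to the trained network's predictions. This is an illustrative sketch based only on the abstract, not the authors' implementation: the penalty form ||W^T W - I||_1, the weighting factor `lam`, and the helper names `l1_orthogonal_penalty`, `train_step`, and `fit_surrogate` are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): L1-orthogonal regularization
# during NN training, followed by fitting a decision tree surrogate.
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier


def l1_orthogonal_penalty(model):
    """Assumed penalty: sum of ||W^T W - I||_1 over all linear-layer weights."""
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            W = module.weight                      # shape: (out_features, in_features)
            eye = torch.eye(W.shape[1], device=W.device)
            penalty = penalty + (W.t() @ W - eye).abs().sum()
    return penalty


def train_step(model, optimizer, criterion, x, y, lam=1e-4):
    """One training step: task loss plus the orthogonality penalty, weighted by lam (assumed)."""
    optimizer.zero_grad()
    loss = criterion(model(x), y) + lam * l1_orthogonal_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()


def fit_surrogate(model, X_train, max_depth=5):
    """Fit a small decision tree to the trained network's predicted labels
    (the surrogate imitates the NN, it is not trained on the ground truth)."""
    with torch.no_grad():
        inputs = torch.as_tensor(X_train, dtype=torch.float32)
        nn_labels = model(inputs).argmax(dim=1).numpy()
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(X_train, nn_labels)
    return tree
```

Under this reading, the regularizer nudges each weight matrix toward orthogonality during training, which is what makes the final network easier to approximate with a compact tree; the surrogate's fidelity can then be checked by comparing its predictions against the network's on held-out data.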