Deep rule visualizer: a novel explainable deep neural network visualization architecture
Date
2023
Authors
Rathnayake, B.L.K.
Abstract
This thesis explores the design and implementation of the Deep Rule Visualizer, an architecture intended to enhance the interpretability and explainability of deep neural networks (DNNs). The goal of this research is to fill the gap in interpretability and to enable effective debugging of incorrect predictions. The thesis begins by introducing the proposed architecture and its components, including xDNN models, saliency maps, and IF ... THEN rules. These components are integrated to give users a deeper understanding of how the DNN arrives at its predictions. The architecture was evaluated on three benchmark datasets. Qualitative analysis focuses on the user interface of the Deep Rule Visualizer, which provides visual representations of rule images, regions of interest, and a novel Deep Rule Score. The interface enables users to analyze the associations between input images and predictions, enhancing interpretability and offering insights into the model's performance. Quantitative evaluation is conducted to assess the effectiveness of the Deep Rule Visualizer. The Deep Rule Score, which combines the LIME score with distances to prototype images, serves as a comprehensive evaluation metric. Results demonstrate that higher Deep Rule Scores align with correct predictions, while lower scores indicate incorrect predictions, across all three datasets; this consistency reinforces the Deep Rule Score as an indicator of prediction accuracy. The Deep Rule Visualizer has the potential to increase trust in deep learning models and to aid in identifying biases and errors. Furthermore, the architecture can be applied in various domains, including healthcare and finance, to provide transparency and understanding in decision-making processes. Several avenues for further research are proposed, such as enhancing rule-generation techniques, integrating additional visualization methods, refining the user interface through user studies, extending the architecture to other domains, and addressing the ethical considerations of using explainable AI.
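The abstract does not give the exact formulation of the Deep Rule Score, only that it combines a LIME score with distances to prototype images. The sketch below shows one plausible way such a combination could be computed, assuming the LIME score is normalized to [0, 1] and that the distance to the nearest class prototype is mapped to a similarity; the function name deep_rule_score, the weighting scheme, and the distance-to-similarity mapping are illustrative assumptions, not the thesis's actual metric.

```python
import numpy as np

def deep_rule_score(lime_score, prototype_distances, weight=0.5):
    """Illustrative combination of a LIME-style score with prototype distances.

    lime_score: assumed to lie in [0, 1], measuring how strongly the local
        explanation supports the predicted class.
    prototype_distances: distances from the input's feature vector to the
        prototype images of the predicted class (smaller = more similar).
    weight: assumed trade-off between the two components.
    """
    distances = np.asarray(prototype_distances, dtype=float)
    # Map the closest-prototype distance to a similarity in (0, 1].
    similarity = 1.0 / (1.0 + distances.min())
    # Weighted combination; the thesis's actual formulation may differ.
    return weight * lime_score + (1.0 - weight) * similarity

# Example: a prediction that is well explained and close to a class
# prototype scores higher than one that is far from every prototype.
print(deep_rule_score(0.8, [0.5, 1.2]))   # ~0.73
print(deep_rule_score(0.4, [4.0, 6.5]))   # ~0.30
```

Under these assumptions, a prediction that is both well supported by its local explanation and close to a prototype of its class receives a higher score, which matches the reported trend that higher Deep Rule Scores align with correct predictions.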
Citation
Rathnayake, B.L.K. (2023). Deep rule visualizer: a novel explainable deep neural network visualization architecture [Master's thesis, University of Moratuwa]. Institutional Repository, University of Moratuwa. https://dl.lib.uom.lk/handle/123/23988
