06/07/21 – Seminar: Francesco Ventura (Politecnico di Torino)

Seminar/congress/conference

Explaining black-box deep neural models through the unsupervised mining of their inner knowledge

06/07/21, 11:00, online via Teams

Artificial Intelligence applications are expanding disruptively into many aspects of our lives, thanks to the development of ever better-performing Deep Neural Networks (DNNs).
However, along with their higher performance, these AI models are characterized by high complexity and opaqueness: they do not allow users to understand the reasoning behind their automatic decision-making.
This severely limits their applicability and raises a wide range of problems in many sensitive contexts, e.g., health, transportation, security, and law.
On the one hand, it is very hard to interpret the decision-making process of AI models at both the local and global levels.
On the other hand, it is even harder to assess the reliability of their predictions over time, e.g., in the presence of concept drift.
Ignoring even one of these aspects may have severe consequences in real-life settings, where users, both expert and non-expert, are expected to trust the decisions taken by "smart" platforms and devices.
In the literature, these challenges are addressed separately, often without taking into account the latent knowledge contained in the deeper layers of the DNNs.
Instead, we claim that these issues are part of the same broader challenge: Model Reliability Management.
To address these challenges, we propose (i) a unified, model-aware strategy for explaining deep neural networks at both the prediction-local and model-global levels, and (ii) a unified, model-aware assessment framework for managing models' performance degradation over time.
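As a rough illustration of what mining a network's inner knowledge can look like in practice, the sketch below extracts the activations of a deep convolutional layer with a forward hook and clusters them into candidate "concept" regions. It is a minimal, hypothetical example (the model, layer choice, and clustering step are placeholders), not the method presented in the talk.

```python
# Minimal sketch: cluster a DNN's inner activations into candidate
# "concept" regions. Model, layer, and cluster count are illustrative
# assumptions, not the speaker's actual pipeline.
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

# Randomly initialized here for a self-contained example;
# in practice one would load pretrained weights.
model = models.resnet18().eval()

activations = {}

def hook(module, inputs, output):
    # Store the feature maps produced by an intermediate layer.
    activations["feats"] = output.detach()

# Attach the hook to a deep convolutional block (layer choice is arbitrary).
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a real input image
with torch.no_grad():
    model(x)

# Reshape the (C, H, W) feature maps into H*W spatial vectors of dimension C,
# then cluster them: each cluster groups locations that activate the deep
# layer similarly, i.e., a candidate interpretable "concept" region.
feats = activations["feats"].squeeze(0)          # (C, H, W)
c, h, w = feats.shape
vectors = feats.permute(1, 2, 0).reshape(-1, c)  # (H*W, C)

kmeans = KMeans(n_clusters=4, n_init=10).fit(vectors.numpy())
concept_mask = kmeans.labels_.reshape(h, w)      # per-location concept labels
print(concept_mask)
```

Overlaying such per-location cluster labels on the input is one common way to turn inner-layer knowledge into a human-inspectable explanation.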
We demonstrate the quality of our explanations through online surveys showing that users consider our local explanations more interpretable than the current state of the art in 75% of cases.
Furthermore, we show that our concept drift management framework detects drift when as little as 10% of the data in the analysis window is drifting, and that it scales horizontally with ease.
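For intuition on window-based drift detection, the toy sketch below compares a reference distribution of model statistics against an analysis window in which 10% of the samples have drifted, using a two-sample Kolmogorov-Smirnov test. The synthetic data, test choice, and threshold are illustrative assumptions, not the framework described in the talk, which is model-aware and operates on the DNN's inner representations.

```python
# Toy sketch of window-based drift detection with a two-sample KS test.
# All numbers here are synthetic and for illustration only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference statistics collected at training time (stand-in distribution).
reference = rng.normal(loc=0.0, scale=1.0, size=1000)

# Current analysis window: 90% in-distribution, 10% drifted samples.
window = np.concatenate([
    rng.normal(loc=0.0, scale=1.0, size=900),
    rng.normal(loc=3.0, scale=1.0, size=100),
])

# Compare the two samples; a small p-value flags a distribution change
# even though only a minority of the window has drifted.
stat, p_value = ks_2samp(reference, window)
drift_detected = p_value < 0.01
print(f"KS statistic={stat:.3f}, p={p_value:.4f}, drift={drift_detected}")
```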
