<!-- Paper: 83 -->
One of the aims of Explainable Artificial Intelligence (XAI) is to equip data-driven, machine-learned models with a high degree of explainability for humans. Understanding and explaining the inferences of a model can be seen as a defeasible reasoning process. This process is likely to be non-monotonic: a conclusion, linked to a set of premises, can be retracted when new information becomes available. In formal logic, computational argumentation is a method, within Artificial Intelligence (AI), focused on modeling defeasible reasoning. This research study focuses on the automatic formation of an argument-based representation of a machine-learned model in order to enhance its degree of explainability, by employing principles and techniques from computational argumentation. It also contributes to the body of knowledge by introducing a novel quantitative human-centred technique for evaluating such a representation, and potentially other XAI methods, in the form of a questionnaire for explainability. An experiment has been conducted with two groups of human participants: one interacting with the argument-based representation, and one with a decision tree, a representation deemed naturally transparent and comprehensible. Findings demonstrate that the explainability of the novel argument-based representation is statistically similar to that associated with the decision trees, as reported by humans via the novel questionnaire.
*** Title, author list and abstract as seen in the Camera-Ready version of the paper that was provided to the Conference Committee. Small changes that may have occurred during processing by Springer may not appear in this window.