Machine learning (ML) models can effectively assist medical diagnosis if they allow physicians to project their knowledge into the model's internal mechanism. Using model-agnostic explanatory interactive machine learning (XIML), physicians iteratively train an ML model and revise its decision-making mechanism, which is depicted as a local explanation. Counterexamples serve as additional training data and thus represent the human feedback only statistically. Unfortunately, counterexamples alone do not guarantee that the feedback persists across subsequent optimization iterations, a form of catastrophic forgetting in XIML that can have serious consequences in sensitive domains such as medical diagnosis. To overcome this issue, we propose a hybrid approach: HYXIML collects the physicians' feedback, learns a set of probabilistic logical rules, and substitutes logical inferences for the model's predictions on closely related instances. We show that combining XIML with probabilistic logic enhances explanatory performance while retaining stable predictive performance.
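To make the substitution step concrete, the following is a minimal illustrative sketch, not the paper's implementation: a hybrid predictor that overrides a trained model's prediction with a rule-based inference whenever an instance matches a sufficiently confident probabilistic rule learned from physician feedback. All names here (Rule, HybridPredictor, the confidence threshold, and the toy feature) are hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # rule body: does it match the instance?
    label: Any                         # rule head: the inferred class
    probability: float                 # confidence attached to the rule

class HybridPredictor:
    def __init__(self, model, rules: List[Rule], threshold: float = 0.9):
        self.model = model          # any object exposing .predict(instance)
        self.rules = rules          # probabilistic rules distilled from feedback
        self.threshold = threshold  # minimum rule confidence to override the model

    def predict(self, instance: dict):
        # Prefer a sufficiently confident matching rule; otherwise defer to the model.
        for rule in self.rules:
            if rule.probability >= self.threshold and rule.condition(instance):
                return rule.label
        return self.model.predict(instance)

# Toy usage: one feedback rule overrides the model on matching instances.
class ConstantModel:
    def predict(self, instance):
        return "benign"

rule = Rule(condition=lambda x: x.get("lesion_diameter_mm", 0) > 6,
            label="malignant", probability=0.95)
hybrid = HybridPredictor(ConstantModel(), [rule])
print(hybrid.predict({"lesion_diameter_mm": 8}))  # -> "malignant" (rule fires)
print(hybrid.predict({"lesion_diameter_mm": 3}))  # -> "benign" (model prediction)
```

Because the rule-based overrides do not depend on the model's weights, feedback expressed as rules survives later optimization iterations, which is the intuition behind avoiding catastrophic forgetting here.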