21st AIAI 2025, 26 - 29 June 2025, Limassol, Cyprus

Merged LIME and SHAP eXplanation (MLSX): BERT case in NER Task

Hedhili Aroua, Ben Tiba Yasmine

Abstract:

  Named Entity Recognition (NER) is a critical task in natural language processing (NLP) that involves identifying and classifying entities such as people, organizations, and locations within a text. While pre-trained models, such as Bidirectional Encoder Representations from Transformers (BERT), have achieved state-of-the-art performance in NER tasks, their inner workings remain opaque owing to the complexity of the model’s architecture. This lack of interpretability raises concerns, particularly in domains that require transparency, such as healthcare and legal applications. Explainable AI (XAI) techniques can be leveraged to provide both local and global explanations of the model's behavior. In this study, we combined local and global techniques, such as LIME and SHAP, to explore how BERT makes decisions in NER tasks. Our results demonstrated the importance of integrating local and global explanations, offering a comprehensive view that builds trust, ensures accountability, and provides actionable insights. This work highlights the need to balance high performance with interpretability, especially in high-stakes environments where transparency is essential.
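The abstract describes merging local (LIME) and global (SHAP) token importances into one explanation. As a rough illustration of what such a merge can look like, the sketch below normalizes two hypothetical per-token score sets and averages them. The merge rule, the example sentence, and all scores are assumptions for illustration only, not the paper's actual MLSX procedure.

```python
# Hypothetical illustration of merging per-token importance scores from a
# local explainer (e.g. LIME) and a global explainer (e.g. SHAP) for one
# NER prediction. Min-max normalize each score set, then average: this
# merge rule is an assumption, not the authors' MLSX method.

def minmax(scores):
    """Scale a dict of token -> importance into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero for constant scores
    return {tok: (v - lo) / span for tok, v in scores.items()}

def merge_explanations(lime_scores, shap_scores):
    """Average the normalized LIME and SHAP importances per token."""
    lime_n, shap_n = minmax(lime_scores), minmax(shap_scores)
    return {tok: (lime_n[tok] + shap_n[tok]) / 2 for tok in lime_scores}

# Made-up scores for the sentence "Angela Merkel visited Paris":
lime_scores = {"Angela": 0.9, "Merkel": 0.8, "visited": 0.1, "Paris": 0.7}
shap_scores = {"Angela": 0.7, "Merkel": 0.9, "visited": 0.0, "Paris": 0.6}

merged = merge_explanations(lime_scores, shap_scores)
top = max(merged, key=merged.get)
print(top, round(merged[top], 3))  # token both explainers rank highly
```

In practice the two score sets would come from running a LIME text explainer and a SHAP explainer over the same BERT NER model; the merge then surfaces tokens that both the local and the global view agree are influential.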

*** Title, author list and abstract as submitted during Camera-Ready version delivery. Small changes that may have occurred during processing by Springer may not appear in this window.