21st AIAI 2025, 26 - 29 June 2025, Limassol, Cyprus

Enhancing Explainability in AI-Powered Data Retrieval Systems

Antony Seabra, Claudio Cavalcante, Sergio Lifschitz

Abstract:

  Explainability is a key aspect of data retrieval systems, particularly when leveraging Large Language Models (LLMs) to build question-answering systems. This study proposes a methodology to enhance explainability in retrieval processes, ensuring that results are not only accurate but also interpretable. By integrating Knowledge Graphs (KGs) with dynamic prompt engineering, we systematically guide LLMs to generate transparent and contextually grounded justifications for their outputs, making retrieval decisions more comprehensible to users. To evaluate the effectiveness of this method, we implement it within a Recommender System and assess its impact on user trust and decision-making. Experimental results demonstrate that our method significantly improves the interpretability of retrieval outcomes while maintaining high retrieval performance, all without requiring model retraining. This work highlights the potential of combining structured knowledge representations with prompt engineering to bridge the gap between AI performance and user-centric explainability.
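The abstract describes combining a Knowledge Graph with dynamic prompt engineering so that an LLM's recommendation comes with a justification traceable to explicit facts. The paper's actual implementation is not shown here; the following is a minimal illustrative sketch under that assumption, where all names (the toy triples, `related_triples`, `build_prompt`) are hypothetical:

```python
# Hypothetical sketch: dynamically assembling an LLM prompt from KG triples
# so the model must ground its recommendation justification in stated facts.
# All entities and helper names are illustrative, not from the paper.

# A toy knowledge graph as (subject, relation, object) triples.
KG = [
    ("UserA", "purchased", "Laptop X"),
    ("Laptop X", "similar_to", "Laptop Y"),
    ("Laptop Y", "has_feature", "long battery life"),
]

def related_triples(kg, entity):
    """Return triples whose subject or object matches the entity."""
    return [t for t in kg if entity in (t[0], t[2])]

def build_prompt(kg, user, candidate):
    """Assemble a recommendation prompt constrained to KG facts,
    asking the LLM to justify its answer step by step."""
    facts = related_triples(kg, user) + related_triples(kg, candidate)
    fact_lines = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return (
        "Using ONLY the facts below, decide whether to recommend "
        f"the candidate item to {user}, and justify each step.\n"
        f"Facts:\n{fact_lines}\n"
        f"Candidate: {candidate}\nAnswer:"
    )

prompt = build_prompt(KG, "UserA", "Laptop Y")
print(prompt)
```

Because the prompt enumerates the exact triples the model may use, a reader can check each sentence of the generated justification against a listed fact, which is the kind of user-facing interpretability the abstract targets.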
