Explainable AI for Web and Text Mining

Authors

  • Vidya Arnav; Lovnish Verma; Anita Budhiraja; Sarwan Singh

Abstract

This study introduces an Explainable AI (XAI) framework developed specifically for web and text mining applications, addressing the critical challenges of transparency and understandability in AI systems. While advanced artificial intelligence models, particularly deep learning architectures, excel in predictive capability, their "black-box" nature often hinders trust, accountability, and regulatory compliance. The proposed framework bridges this gap by integrating interpretable models with post-hoc explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). It also incorporates interactive visualization tools to elucidate outputs such as sentiment analysis, topic modeling, and keyword significance, empowering stakeholders to validate and refine AI-driven insights. Through case studies in domains such as healthcare, e-commerce, and legal services, the framework demonstrates its adaptability and practical utility in enhancing user trust and promoting ethical AI practices. Experimental results show that it balances interpretability with performance, ensuring usability across diverse applications while addressing challenges such as scalability and domain-specific explanation. This research advances the field of XAI by providing a structured, transparent, and adaptable solution for web and text mining tasks. Future work will focus on optimizing scalability, tailoring explanations to specific industries, and integrating ethical considerations such as bias mitigation to ensure the responsible deployment of AI systems.
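To make the post-hoc explanation step concrete, the sketch below shows how LIME can attribute a sentiment prediction to individual words in a document. It is a minimal sketch, not the authors' implementation: it assumes a scikit-learn text classifier and the open-source `lime` package, and the toy reviews, labels, and class names are illustrative rather than taken from the paper's case studies.

```python
# Minimal sketch, assuming scikit-learn and the open-source `lime` package
# (pip install lime scikit-learn). Data below is illustrative only.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment corpus standing in for a web/text-mining dataset.
train_texts = [
    "excellent service and fast delivery",
    "terrible support, my order never arrived",
    "great product, works exactly as described",
    "awful experience, would not recommend",
]
train_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Interpretable baseline: TF-IDF features + logistic regression.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

# Post-hoc explanation with LIME: perturb the input text and fit a local
# surrogate model, yielding per-word contributions to the prediction.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "fast delivery but awful support",
    pipeline.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # [(word, weight), ...] i.e. keyword significance
```

The per-word weights returned by `as_list()` are one way to surface the "keyword significance" outputs mentioned above and could feed the interactive visualizations the framework describes.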

Published

2026-01-03

Section

Articles