KAN for Enhanced Explainability and Interpretability

Project One Liner: Generating Domain Context Grounded Natural Language Explanations for Healthcare Tasks

Status: ongoing

Project Theme: explainability

Project Areas: healthcare

Team: Gokul S Krishnan, Sowmya S Sundaram, Krithi Shailya, Venkatanathan K V, Ananya Ravi, Aditi Anand, Balaraman Ravindran

Short Description: Modern medical AI systems—including large language models (LLMs) and vision‑language models (VLMs)—have achieved remarkable diagnostic accuracy on tasks such as chest radiograph interpretation and clinical note analysis. Yet these models remain “black boxes,” often generating explanations that hallucinate or diverge from the actual input data. Such unfaithful rationales undermine clinician trust and impede real‑world adoption. Our work addresses this gap by introducing a unified framework that leverages post‑hoc explainable AI (XAI) techniques to produce context‑grounded natural language explanations. By reintegrating the model‑derived importance signals into a second prompting phase, we aim to transform AI from an opaque decision engine into a transparent, trustworthy aid in patient care.
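The two-phase idea above can be sketched minimally: phase one yields a prediction and post-hoc importance scores (e.g., from an attribution method such as SHAP), and phase two re-injects the top-scoring findings into the prompt so the generated explanation stays grounded in the input. The function names, the toy attribution scores, and the prompt wording below are all illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of the two-phase prompting pipeline.
# The attribution scores here are a stand-in for real XAI outputs
# (e.g., SHAP values); the prompt template is an illustrative assumption.

def top_features(importances, k=3):
    """Select the k most influential input features by absolute score."""
    return sorted(importances.items(), key=lambda kv: -abs(kv[1]))[:k]

def build_grounded_prompt(note, prediction, importances, k=3):
    """Compose the second-phase prompt that cites the attribution evidence."""
    evidence = "; ".join(
        f"{feat} (weight {w:+.2f})" for feat, w in top_features(importances, k)
    )
    return (
        f"Clinical note: {note}\n"
        f"Model prediction: {prediction}\n"
        f"Most influential findings (post-hoc attribution): {evidence}\n"
        "Explain the prediction in plain language, citing only the findings above."
    )

# Toy example: a short note, a prediction, and mock attribution scores.
scores = {"elevated troponin": 0.82, "chest pain": 0.41,
          "age 54": 0.05, "non-smoker": -0.12}
prompt = build_grounded_prompt(
    "54yo with chest pain, elevated troponin.", "acute MI", scores
)
print(prompt)
```

Constraining the explanation to the attributed findings is what makes the rationale checkable: a clinician can verify each cited feature against the input instead of trusting a free-form justification.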