Explainable Artificial Intelligence (XAI) models aim to make the underlying inference mechanisms of AI systems transparent and interpretable to human users. But humans can easily be overwhelmed by explanations that are too numerous or too detailed; an interactive communication process can help the system understand the user and identify the user-specific content that actually needs explaining, says Song-Chun Zhu, the project's principal investigator and a professor of Statistics and Computer Science. Zhu and his team therefore set out to improve on existing XAI models by framing explanation generation as an iterative process of communication between the human and the machine.

Arjun Reddy Akula, a Ph.D. candidate at UCLA who led the work, said, “In our proposed framework, we let the machine and the user solve a collaborative task, but the machine’s mind and the human user’s mind only have partial knowledge of the environment. Hence, the machine and user need to communicate with each other using their partial knowledge; otherwise, they would not be able to solve the collaborative task optimally.”
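
To make the idea of a dialogue between two agents with only partial knowledge more concrete, here is a minimal, hypothetical sketch in Python. The names (MachineAgent, UserModel, collaborate) and the set-based representation of knowledge are illustrative assumptions for this article, not the authors' published implementation: the machine explains one fact per round, the user absorbs it, and the machine updates its estimate of what the user knows until the shared task becomes solvable.

```python
class MachineAgent:
    """Machine with partial knowledge of the environment and a running
    estimate of what the user already knows."""

    def __init__(self, knowledge):
        self.knowledge = set(knowledge)      # facts the machine knows
        self.user_estimate = set()           # machine's belief about the user's knowledge

    def propose_explanation(self):
        # Explain only facts the user does not appear to know yet,
        # one per round, to avoid overwhelming the user.
        unexplained = self.knowledge - self.user_estimate
        return next(iter(unexplained), None)

    def incorporate_feedback(self, fact, understood):
        # Update the estimate of the user's knowledge from their feedback.
        if understood:
            self.user_estimate.add(fact)


class UserModel:
    """Stand-in for the human user, who also holds only partial knowledge."""

    def __init__(self, knowledge):
        self.knowledge = set(knowledge)

    def respond(self, fact):
        self.knowledge.add(fact)             # the user absorbs the explained fact
        return True                          # and acknowledges understanding


def collaborate(machine, user, task_facts, max_rounds=10):
    """Run explanation rounds until the user knows enough to complete the task."""
    for _ in range(max_rounds):
        if task_facts <= user.knowledge:     # task already solvable by the user
            break
        fact = machine.propose_explanation()
        if fact is None:                     # machine has nothing new to explain
            break
        understood = user.respond(fact)
        machine.incorporate_feedback(fact, understood)
    return task_facts <= user.knowledge


if __name__ == "__main__":
    task = {"rule_A", "rule_B", "rule_C"}
    machine = MachineAgent(knowledge=task)
    user = UserModel(knowledge={"rule_A"})
    print("User can solve the task after the dialogue:", collaborate(machine, user, task))
```

In this toy loop the communication is what closes the gap: neither agent's knowledge alone is assumed sufficient, and each round of explanation and feedback narrows the difference between what the machine knows and what the machine believes the user knows.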

Akula also said, “Understanding and developing human trust in AI systems remains a significant challenge, as these systems often cannot explain why they reached a specific recommendation or decision. This is especially problematic in high-risk environments such as banking, healthcare, and insurance, where AI decisions can have significant consequences. Our work will make it easier for both expert and non-expert AI human users to operate, understand, and trust the AI system’s recommendations.”

This work has been published in the prestigious iScience journal and can be accessed here: https://www.cell.com/iscience/pdf/S2589-0042(21)01551-0.pdf