Countering Creative Information Manipulation with Explainable AI (CIMPLE)
Type: International Project
Duration: from 2021 Apr 01 to 2024 Mar 31
Financed by: FCT
Prime Contractor: EURECOM (Other) - Paris, France
Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet it remains in its infancy. Most relevant efforts focus on increased transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions (interpretability). Explainability, by contrast, considers how an AI system can be understood by its human users. The understandability of such explanations, and their suitability to particular users and application domains, have so far received very little attention. There is therefore a need for an interdisciplinary and drastic evolution in XAI methods.

CIMPLE will draw on models of human creativity, both in manipulating and in understanding information, to design more understandable, reconfigurable and personalisable explanations. Human factors are key determinants of the success of the relevant AI models. In some contexts, such as misinformation detection, existing technical XAI methods do not suffice, because the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users' trust in the derived explanations. Past research has shown that presenting users with bare true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used.

Knowledge Graphs offer significant potential to better structure the core of AI models, using semantic representations when producing explanations for their decisions. By capturing the context and the application domain in a granular manner, such graphs offer a much-needed semantic layer that is missing from typical brute-force machine learning approaches. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
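To illustrate the kind of semantic layer described above, the following minimal Python sketch (using rdflib) represents a credibility decision and its supporting evidence as knowledge-graph triples, then traverses them to render a human-readable explanation. All namespaces, property names and data values here are illustrative assumptions, not CIMPLE's actual schema or output.

    # A minimal sketch, not CIMPLE's implementation: a credibility verdict
    # stored as explicit triples, with an explanation read off the graph.
    # The EX namespace and all property names are hypothetical.
    from rdflib import Graph, Namespace, Literal, URIRef
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/cimple/")

    g = Graph()
    claim = EX["claim-42"]

    # The claim, its source, and the contradicting evidence are explicit
    # nodes and edges rather than opaque model weights.
    g.add((claim, RDF.type, EX.Claim))
    g.add((claim, EX.text, Literal("Vaccine X alters human DNA")))
    g.add((claim, EX.publishedBy, EX["outlet-7"]))
    g.add((EX["outlet-7"], EX.credibilityScore, Literal(0.21)))
    g.add((claim, EX.contradictedBy, EX["who-factsheet-12"]))
    g.add((EX["who-factsheet-12"], EX.publishedBy, EX.WHO))
    g.add((claim, EX.verdict, Literal("false")))

    def explain(graph: Graph, claim_node: URIRef) -> str:
        """Traverse the triples around a claim and render them as prose."""
        verdict = graph.value(claim_node, EX.verdict)
        evidence = graph.value(claim_node, EX.contradictedBy)
        source = graph.value(evidence, EX.publishedBy) if evidence else None
        explanation = f"The claim was rated '{verdict}'"
        if evidence is not None:
            explanation += f" because it is contradicted by '{evidence.split('/')[-1]}'"
        if source is not None:
            explanation += f", published by {source.split('/')[-1]}"
        return explanation + "."

    print(explain(g, claim))
    # -> The claim was rated 'false' because it is contradicted by
    #    'who-factsheet-12', published by WHO.

In a real pipeline such triples would presumably be produced by the detection models and external fact-checking sources; the point of the sketch is that the explanation is assembled from explicit, inspectable structure rather than recovered post hoc from a black-box model.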
Partnerships
- EURECOM (Other) - Paris, France
- INESC-ID Lisboa (Other) - Lisboa, Portugal
- The Open University (University) - Milton Keynes, UK
- University of Economics and Business (University) - Prague, Czech Republic
- WebLyzard technology, WLT (Company) - Vienna, Austria