Please use this identifier to cite or link to this item: http://dx.doi.org/10.25673/38473
Title: Towards speech-based interactive post hoc explanations in explainable AI
Author(s): Hillmann, Stefan
Möller, Sebastian
Michael, Thilo
Issue Date: 2021
Type: Conference object
Language: English
URN: urn:nbn:de:gbv:ma9:1-1981185920-387197
Keywords: XAI
Explainable AI
Post hoc explanations
Spoken dialog
Argumentation
Abstract: AI-based systems offer solutions for information extraction (e.g., finding information), information transformation (e.g., machine translation), classification (e.g., classifying news as fake or true), or decision support (e.g., providing diagnoses and treatment proposals for medical doctors) in many real-world applications. The solutions are based on machine learning (ML) models and are commonly offered to a large and diverse group of users, some of them experts, many others naïve users from a large population. Nowadays, deep neural network architectures in particular are black boxes for users and even developers [1, 4, 9] (also cp. [6]). A major goal of Explainable Artificial Intelligence (XAI) is making complex decision-making systems more trustworthy and accountable [7, p. 2]. That is why XAI seeks to ensure transparency, interpretability, and explainability [9]. Common to most users is that they are not able to understand the functioning of AI-based systems, i.e., these are perceived as black boxes. Humans are confronted with the results, but they cannot comprehend what information was used by the system for reaching this result (interpretability), nor in which way this information was processed and weighted (transparency). The underlying reason is that an explicit functional description of the system is missing, or even not possible, in most ML-based AI systems: the function is trained by adjusting the internal parameters, and sometimes also the architecture is learned from a basic set of standard architectures. However, natural language and speech-based explanations allow better explainability [1, p. 11] through interactive post hoc explanations in the form of an informed dialog [7, p. 2]. Additionally, AI is also addressed by regulations, e.g., of the EU [2, 3], and thus becomes even more relevant for industry and research. Here, not least the recognition of bias in AI systems' decisions plays an important role.
URI: https://opendata.uni-halle.de//handle/1981185920/38719
http://dx.doi.org/10.25673/38473
Open Access: Open access publication
License: (CC BY-SA 4.0) Creative Commons Attribution-ShareAlike 4.0 International
Appears in Collections: Fakultät für Elektrotechnik und Informationstechnik (OA)

Files in this item:
File: AI-Debate2021_Hillmann et al._Final.pdf
Description: Article in conference proceedings
Size: 435.25 kB
Format: Adobe PDF