Please use this identifier to cite or link to this item:
http://dx.doi.org/10.25673/38473
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hillmann, Stefan | - |
dc.contributor.author | Möller, Sebastian | - |
dc.contributor.author | Michael, Thilo | - |
dc.date.accessioned | 2021-09-29T12:43:07Z | - |
dc.date.available | 2021-09-29T12:43:07Z | - |
dc.date.issued | 2021 | - |
dc.date.submitted | 2021 | - |
dc.identifier.uri | https://opendata.uni-halle.de//handle/1981185920/38719 | - |
dc.identifier.uri | http://dx.doi.org/10.25673/38473 | - |
dc.description.abstract | AI-based systems offer solutions for information extraction (e.g., finding information), information transformation (e.g., machine translation), classification (e.g., classifying news as fake or true), or decision support (e.g., providing diagnoses and treatment proposals for medical doctors) in many real-world applications. The solutions are based on machine learning (ML) models and are commonly offered to a large and diverse group of users, some of them experts, many others naïve users from a large population. Nowadays, deep neural network architectures in particular are black boxes for users and even developers [1, 4, 9] (cf. also [6]). A major goal of Explainable Artificial Intelligence (XAI) is making complex decision-making systems more trustworthy and accountable [7, p. 2]. That is why XAI seeks to ensure transparency, interpretability, and explainability [9]. Most users are not able to understand the functioning of AI-based systems, i.e., these systems are perceived as black boxes. Humans are confronted with the results, but they cannot comprehend what information was used by the system to reach a result (interpretability), nor in which way this information was processed and weighted (transparency). The underlying reason is that an explicit functional description of the system is missing, or even impossible, in most ML-based AI systems: the function is trained by adjusting internal parameters, and sometimes the architecture itself is learned from a basic set of standard architectures. However, natural language and speech-based explanations allow for better explainability [1, p. 11] through interactive post hoc explanations in the form of an informed dialog [7, p. 2]. Additionally, AI is addressed by regulations, e.g., of the EU [2, 3], and thus becomes even more relevant for industry and research. Here, not least the recognition of bias in AI systems' decisions plays an important role. | eng |
dc.language.iso | eng | - |
dc.relation.ispartof | https://opendata.uni-halle.de//handle/1981185920/38717 | - |
dc.relation.uri | https://opendata.uni-halle.de//handle/1981185920/38717 | - |
dc.rights.uri | https://creativecommons.org/licenses/by-sa/4.0/ | - |
dc.subject | XAI | eng |
dc.subject | Explainable AI | eng |
dc.subject | Post hoc explanations | eng |
dc.subject | Spoken dialog | eng |
dc.subject | Argumentation | eng |
dc.subject.ddc | 006.35 | - |
dc.title | Towards speech-based interactive post hoc explanations in explainable AI | eng |
dc.type | Conference Object | - |
dc.identifier.urn | urn:nbn:de:gbv:ma9:1-1981185920-387197 | - |
local.versionType | publishedVersion | - |
local.openaccess | true | - |
dc.identifier.ppn | 1771691174 | - |
local.bibliographicCitation.year | 2021 | - |
cbs.sru.importDate | 2021-09-23T06:48:13Z | - |
local.bibliographicCitation | Contained in: 1st AI-DEbate Workshop, 2021 | - |
local.accessrights.dnb | free | - |
Appears in Collections: Fakultät für Elektrotechnik und Informationstechnik (OA)
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
AI-Debate2021_Hillmann et al._Final.pdf | Article in conference proceedings | 435.25 kB | Adobe PDF |