Please use this identifier to cite or link to this item:
http://dx.doi.org/10.25673/38475
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Dresvyanskiy, Denis | - |
dc.contributor.author | Siegert, Ingo | - |
dc.contributor.author | Karpov, Alexei | - |
dc.contributor.author | Minker, Wolfgang | - |
dc.date.accessioned | 2021-09-29T12:58:20Z | - |
dc.date.available | 2021-09-29T12:58:20Z | - |
dc.date.issued | 2021 | - |
dc.date.submitted | 2021 | - |
dc.identifier.uri | https://opendata.uni-halle.de//handle/1981185920/38721 | - |
dc.identifier.uri | http://dx.doi.org/10.25673/38475 | - |
dc.description.abstract | INTRODUCTION Dialogue assistants endowed with weak artificial intelligence have become a common technology, widespread across many industrial spheres, from operating robots by voice to speaking with an intelligent bot over the telephone. However, such systems are still far from being truly intelligent, since they cannot fully mimic or replace humans during human-computer interaction (HCI). Nowadays, paralinguistic analysis is becoming one of the most important parts of HCI, because the requirements placed on such systems have risen with the sharp improvement of speech-recognition systems: an HCI system should recognize not only what the user is talking about, but also how he/she is talking and which intention or state he/she currently has. This includes analyzing and evaluating such high-level features of dialogue as stress, emotions, engagement, and many others. Although many studies in paralinguistics have been devoted to recognizing high-level features (such as emotions [1] and stress [17, 25]) from audio cues, there are still almost no insights into how this could work for engagement. | eng |
dc.language.iso | eng | - |
dc.relation.ispartof | https://opendata.uni-halle.de//handle/1981185920/38717 | - |
dc.relation.uri | https://opendata.uni-halle.de//handle/1981185920/38717 | - |
dc.rights.uri | https://creativecommons.org/licenses/by-sa/4.0/ | - |
dc.subject | Paralinguistics | eng |
dc.subject | Human-computer interaction | eng |
dc.subject | Engagement recognition | eng |
dc.subject | Audio processing | eng |
dc.subject.ddc | 006.35 | - |
dc.title | Engagement recognition using audio channel only | eng |
dc.type | Conference Object | - |
dc.identifier.urn | urn:nbn:de:gbv:ma9:1-1981185920-387212 | - |
local.versionType | publishedVersion | - |
local.openaccess | true | - |
dc.identifier.ppn | 1771708050 | - |
local.bibliographicCitation.year | 2021 | - |
cbs.sru.importDate | 2021-09-23T09:22:16Z | - |
local.bibliographicCitation | Contained in: 1st AI-DEbate Workshop, 2021 | - |
local.accessrights.dnb | free | - |
Appears in Collections: | Fakultät für Elektrotechnik und Informationstechnik (OA) |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
AI-Debate2021_Dresvyanskiy et al._Final.pdf | Article in conference proceedings | 484.63 kB | Adobe PDF | View/Open |