Please use this identifier to cite or link to this resource: http://dx.doi.org/10.25673/38475
Full metadata record
DC Element | Value | Language
dc.contributor.author | Dresvyanskiy, Denis | -
dc.contributor.author | Siegert, Ingo | -
dc.contributor.author | Karpov, Alexei | -
dc.contributor.author | Minker, Wolfgang | -
dc.date.accessioned | 2021-09-29T12:58:20Z | -
dc.date.available | 2021-09-29T12:58:20Z | -
dc.date.issued | 2021 | -
dc.date.submitted | 2021 | -
dc.identifier.uri | https://opendata.uni-halle.de//handle/1981185920/38721 | -
dc.identifier.uri | http://dx.doi.org/10.25673/38475 | -
dc.description.abstract | INTRODUCTION Dialogue assistants endowed with weak artificial intelligence have become a common technology, widespread across many industrial spheres - from operating robots by voice to speaking with an intelligent bot over the telephone. However, such systems are still far from being essentially intelligent, since they cannot fully mimic or replace humans during human-computer interaction (HCI). Nowadays, paralinguistic analysis is becoming one of the most important parts of HCI, because the requirements on such systems have grown with the sharp improvement of speech-recognition systems: an HCI system should not only recognize what the user is saying, but also how he/she is saying it, and which intention or state he/she currently has. This includes analyzing and evaluating such high-level features of dialogue as stress, emotions, engagement, and many others. Although many studies in paralinguistics have been devoted to recognizing high-level features (such as emotions [1] and stress [17, 25]) from audio cues, there are still almost no insights into how this could work for engagement. | eng
dc.language.iso | eng | -
dc.relation.ispartof | https://opendata.uni-halle.de//handle/1981185920/38717 | -
dc.relation.uri | https://opendata.uni-halle.de//handle/1981185920/38717 | -
dc.rights.uri | https://creativecommons.org/licenses/by-sa/4.0/ | -
dc.subject | Paralinguistics | eng
dc.subject | Human-computer interaction | eng
dc.subject | Engagement recognition | eng
dc.subject | Audio processing | eng
dc.subject.ddc | 006.35 | -
dc.title | Engagement recognition using audio channel only | eng
dc.type | Conference Object | -
dc.identifier.urn | urn:nbn:de:gbv:ma9:1-1981185920-387212 | -
local.versionType | publishedVersion | -
local.openaccess | true | -
dc.identifier.ppn | 1771708050 | -
local.bibliographicCitation.year | 2021 | -
cbs.sru.importDate | 2021-09-23T09:22:16Z | -
local.bibliographicCitation | Contained in: 1st AI-DEbate Workshop, 2021 | -
local.accessrights.dnb | free | -
Appears in collections: Fakultät für Elektrotechnik und Informationstechnik (OA)

Files in this item:
File | Description | Size | Format
AI-Debate2021_Dresvyanskiy et al._Final.pdf | Article in conference proceedings | 484.63 kB | Adobe PDF