Please use this identifier to cite or link to this item: http://dx.doi.org/10.25673/86176
Full metadata record
DC Field | Value | Language
dc.contributor.author | Spiller, Moritz | -
dc.contributor.author | Liu, Ying-Hsang | -
dc.contributor.author | Hossain, Md Zakir | -
dc.contributor.author | Gedeon, Tom | -
dc.contributor.author | Koltermann, Julia | -
dc.contributor.author | Nürnberger, Andreas | -
dc.date.accessioned | 2022-06-13T11:55:08Z | -
dc.date.available | 2022-06-13T11:55:08Z | -
dc.date.issued | 2021 | -
dc.date.submitted | 2021 | -
dc.identifier.uri | https://opendata.uni-halle.de//handle/1981185920/88128 | -
dc.identifier.uri | http://dx.doi.org/10.25673/86176 | -
dc.description.abstract | Information visualizations are an efficient means to support users in understanding large amounts of complex, interconnected data; user comprehension, however, depends on individual factors such as their cognitive abilities. The research literature provides evidence that user-adaptive information visualizations positively impact the users’ performance in visualization tasks. This study attempts to contribute toward the development of a computational model to predict the users’ success in visual search tasks from eye gaze data and thereby drive such user-adaptive systems. State-of-the-art deep learning models for time series classification have been trained on sequential eye gaze data obtained from 40 study participants’ interaction with a circular and an organizational graph. The results suggest that such models yield higher accuracy than a baseline classifier and previously used models for this purpose. In particular, a Multivariate Long Short Term Memory Fully Convolutional Network shows encouraging performance for its use in online user-adaptive systems. Given this finding, such a computational model can infer the users’ need for support during interaction with a graph and trigger appropriate interventions in user-adaptive information visualization systems. This facilitates the design of such systems, since additional interaction data such as mouse clicks are not required. | eng
dc.description.sponsorship | Transformationsvertrag | -
dc.language.iso | eng | -
dc.relation.ispartof | http://dl.acm.org/pub.cfm?id=J1341 | -
dc.rights.uri | https://creativecommons.org/licenses/by-sa/4.0/ | -
dc.subject | Human-centered computing | eng
dc.subject | User studies | eng
dc.subject | Computing methodologies | eng
dc.subject | Machine learning approaches | eng
dc.subject | Eye tracking | eng
dc.subject | Time series classification | eng
dc.subject.ddc | 610.72 | -
dc.title | Predicting visual search task success from eye gaze data as a basis for user-adaptive information visualization systems | eng
dc.type | Article | -
dc.identifier.urn | urn:nbn:de:gbv:ma9:1-1981185920-881283 | -
local.versionType | publishedVersion | -
local.bibliographicCitation.journaltitle | ACM transactions on interactive intelligent systems | -
local.bibliographicCitation.volume | 11 | -
local.bibliographicCitation.issue | 2 | -
local.bibliographicCitation.pagestart | 1 | -
local.bibliographicCitation.pageend | 25 | -
local.bibliographicCitation.publishername | ACM | -
local.bibliographicCitation.publisherplace | New York, NY | -
local.bibliographicCitation.doi | 10.1145/3446638 | -
local.openaccess | true | -
dc.identifier.ppn | 1762517159 | -
local.bibliographicCitation.year | 2021 | -
cbs.sru.importDate | 2022-06-13T11:48:05Z | -
local.bibliographicCitation | Contained in: ACM transactions on interactive intelligent systems - New York, NY : ACM, 2011 | -
local.accessrights.dnb | free | -
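
Note on the model named in the abstract: the best-performing classifier is a Multivariate LSTM Fully Convolutional Network (MLSTM-FCN), i.e. an LSTM branch and a 1-D convolutional branch applied in parallel to the same gaze time series, with their features concatenated into a classification head. The sketch below is only an illustration of that general architecture family, not the authors' implementation; the layer sizes, the assumed four gaze features, the 120-step window length, and the two classes (task success vs. failure) are placeholder assumptions.

```python
# Illustrative MLSTM-FCN-style classifier for multivariate gaze sequences (PyTorch).
# All hyperparameters are placeholders, not the configuration used in the article.
import torch
import torch.nn as nn

class MLSTMFCN(nn.Module):
    def __init__(self, n_features: int, n_classes: int, lstm_units: int = 8):
        super().__init__()
        # LSTM branch reads the raw multivariate sequence (batch, time, features)
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=lstm_units, batch_first=True)
        self.dropout = nn.Dropout(0.8)
        # Fully convolutional branch; Conv1d expects (batch, channels, time)
        self.fcn = nn.Sequential(
            nn.Conv1d(n_features, 128, kernel_size=8, padding="same"),
            nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding="same"),
            nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding="same"),
            nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool1d(1)            # global average pooling over time
        self.head = nn.Linear(lstm_units + 128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)                    # final hidden state of the LSTM branch
        lstm_feat = self.dropout(h_n[-1])             # (batch, lstm_units)
        conv_feat = self.gap(self.fcn(x.transpose(1, 2))).squeeze(-1)  # (batch, 128)
        return self.head(torch.cat([lstm_feat, conv_feat], dim=1))     # class logits

# Toy usage: 16 gaze windows, 120 time steps, 4 hypothetical features
# (e.g. gaze x, gaze y, left/right pupil diameter), 2 classes (success / failure).
model = MLSTMFCN(n_features=4, n_classes=2)
logits = model(torch.randn(16, 120, 4))
print(logits.shape)  # torch.Size([16, 2])
```

The split into two branches is the usual rationale for this family of models: the convolutional branch captures local temporal patterns in the gaze signal, while the LSTM branch summarizes longer-range dependencies across the sequence.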
Appears in Collections: Medizinische Fakultät (OA)

Files in This Item:
File | Description | Size | Format
Spiller et al._Predicting visual_2021.pdf | Secondary publication (Zweitveröffentlichung) | 3.28 MB | Adobe PDF