Please use this identifier to cite or link to this item: http://dx.doi.org/10.25673/85766
Title: Automatic vs. human recognition of pain intensity from facial expression on the X-ITE pain database
Author(s): Othman, Ehsan
Werner, Philipp
Saxen, Frerk
Al-Hamadi, Ayoub
Gruss, Sascha
Walter, Steffen
Issue Date: 2021
Type: Article
Language: English
URN: urn:nbn:de:gbv:ma9:1-1981185920-877188
Subjects: Pain recognition
Facial expression
Multi-task learning
Random forest
CNN
Abstract: Prior work on automated methods demonstrated that it is possible to recognize pain intensity from frontal faces in videos, while there is an assumption that humans are very adept at this task compared to machines. In this paper, we investigate whether this assumption is correct by comparing the results achieved by two human observers with those achieved by a Random Forest classifier (RFc) baseline model (called RFc-BL) and by three proposed automated models. The first proposed model is a Random Forest classifying descriptors of Action Unit (AU) time series; the second is a modified MobileNetV2 CNN classifying face images that combine three points in time; and the third is a custom deep network combining two CNN branches using the same input as MobileNetV2 plus knowledge of the RFc. We conduct experiments with the X-ITE phasic pain database, which comprises videotaped responses to heat and electrical pain stimuli, each at three intensities. Distinguishing these six stimulation types plus no stimulation was the main 7-class classification task for the human observers and the automated approaches. Further, we conducted reduced 5-class and 3-class classification experiments, applied Multi-task learning, and introduced a newly suggested sample weighting method. Experimental results show that the pain assessments of the human observers are significantly better than guessing and outperform the automatic baseline approach (RFc-BL) by about 1%; however, human performance is quite poor, because pain at intensities that may ethically be induced in experimental studies often does not show in the facial reaction. We discovered that downweighting those samples during training improves the performance for all samples. The proposed RFc and two-CNNs models (using the proposed sample weighting) significantly outperformed the human observers by about 6% and 7%, respectively.
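The downweighting idea mentioned in the abstract — giving less training weight to samples whose induced pain barely shows in the face — can be illustrated with per-sample weights passed to a Random Forest. This is a minimal sketch under stated assumptions: the feature matrix, the facial-response proxy, the threshold, and the weight values are all hypothetical, not the authors' implementation from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in data: 200 samples of AU-descriptor-like features
# and 7-class stimulation labels (6 stimulus types + no stimulation).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 7, size=200)

# Proxy for facial reaction strength (assumption: summed absolute AU activity).
facial_response = np.abs(X).sum(axis=1)

# Downweight samples with weak facial response; 0.3 is an illustrative value.
threshold = np.median(facial_response)
weights = np.where(facial_response < threshold, 0.3, 1.0)

# Random Forests in scikit-learn accept per-sample weights directly.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y, sample_weight=weights)
```

The same `sample_weight` mechanism works for most scikit-learn estimators, so the weighting scheme can be swapped between the RFc and other models without changing the training code.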
URI: https://opendata.uni-halle.de//handle/1981185920/87718
http://dx.doi.org/10.25673/85766
Open Access: Open access publication
License: (CC BY 4.0) Creative Commons Attribution 4.0
Sponsor/Funder: OVGU-Publikationsfonds 2021
Journal Title: Sensors
Publisher: MDPI
Publisher Place: Basel
Volume: 21
Issue: 9
Original Publication: 10.3390/s21093273
Page Start: 1
Page End: 19
Appears in Collections: Fakultät für Elektrotechnik und Informationstechnik (OA)

Files in This Item:
File: Othman et al._Automatic_2021.pdf
Description: Secondary publication (Zweitveröffentlichung)
Size: 3.05 MB
Format: Adobe PDF