FME '22: Proceedings of the 2nd Workshop on Facial Micro-Expression: Advanced Techniques for Multi-Modal Facial Expression Analysis
SESSION: Keynote Talk
Vision-based Physiological and Emotional Signal Analysis with Application to Mental Disorder Diagnosis
- Hu Han
Face images and videos contain rich visual biometric signals, from apparent signals such as attribute and identity characteristics to subtle signals corresponding to physiological and emotional states. Benefiting from the great success of deep learning methods, tremendous progress has been made in apparent visual signal analysis. Subtle signal analysis, however, still faces major challenges: indistinguishable patterns, low PSNR, and transient duration. Attempts to resolve these challenges usually rely on engineered designs to extract and enhance the subtle signals. Our recent work aims to improve the robustness of physiological and emotional signal analysis via signal disentanglement, context modeling, and semi-supervised learning. Since people with mental disorders are likely to exhibit subtle visual signals, we also propose to fuse individual facial visual signals to perform mental disorder diagnosis, such as AD, apathy, and anxiety prediction.
SESSION: Workshop Presentations
MTSN: A Multi-Temporal Stream Network for Spotting Facial Macro- and Micro-Expression with Hard and Soft Pseudo-labels
- Gen-Bing Liong
- Sze-Teng Liong
- John See
- Chee-Seng Chan
This paper addresses the challenge of spotting facial macro- and micro-expressions in long videos. We propose the multi-temporal stream network (MTSN), a model that takes two distinct inputs to account for the different temporal characteristics of facial movements. We also introduce a hard and soft pseudo-labeling technique that enables the network to distinguish expression frames from non-expression frames by learning salient features at the expression peak (apex) frame. Consequently, we demonstrate how a single output from the MTSN model can be post-processed to predict both macro- and micro-expression intervals. Our method significantly outperforms the MEGC 2022 baseline, achieving an overall F1-score of 0.2586, and also performs strongly on the MEGC 2021 benchmark with overall F1-scores of 0.3620 and 0.2867 on CAS(ME)2 and SAMM Long Videos, respectively.
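The abstract only names the hard/soft pseudo-labeling idea, so the following is a minimal illustrative sketch rather than the authors' exact scheme: it assumes a binary hard label over an annotated onset–offset interval and a Gaussian-weighted soft label centered on the apex frame, so that frames near the expression peak carry more weight. The function name, the sigma parameter, and the windowing are all assumptions for illustration.

```python
import numpy as np

def make_pseudo_labels(num_frames, onset, apex, offset, sigma=4.0):
    """Illustrative hard and soft pseudo-labels for one expression clip.

    Hard label: 1 inside the annotated [onset, offset] interval, 0 outside.
    Soft label: Gaussian bump centered on the apex frame, emphasizing
    frames near the expression peak. (Sketch only; MTSN's actual
    labeling may differ in form and parameters.)
    """
    frames = np.arange(num_frames)
    hard = ((frames >= onset) & (frames <= offset)).astype(np.float32)
    soft = np.exp(-0.5 * ((frames - apex) / sigma) ** 2).astype(np.float32)
    soft *= hard  # restrict soft weighting to the annotated interval
    return hard, soft

# Example: a 60-frame clip with onset at frame 20, apex at 27, offset at 40.
hard, soft = make_pseudo_labels(num_frames=60, onset=20, apex=27, offset=40)
```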
A More Objective Quantification of Micro-Expression Intensity through Facial Electromyography
- Shaoyuan Lu
- Jingting Li
- Yan Wang
- Zizhao Dong
- Su-Jing Wang
- Xiaolan Fu
Micro-expressions are facial expressions that individuals reveal when trying to hide their genuine emotions. They have potential applications in areas such as lie detection and national security. It is generally believed that micro-expressions have three essential characteristics: short duration, low intensity, and local asymmetry. Most previous studies have investigated micro-expressions based on the short-duration characteristic; to our knowledge, no empirical studies have examined the low-intensity characteristic. In this paper, we use facial electromyography (EMG) for the first time to study the low intensity of micro-expressions. In our experiment, micro-expressions were elicited from subjects using the second-generation micro-expression elicitation paradigm while their facial EMG was recorded simultaneously. We collected and annotated 33 macro-expressions and 48 micro-expressions. Comparing two EMG indicators, (1) the apex value as a percentage of maximum voluntary contraction (MVC%) and (2) the area under the EMG signal curve (integrated EMG, iEMG), we found that both the MVC% and the iEMG of micro-expressions were significantly smaller than those of macro-expressions. These results demonstrate that the intensity of micro-expressions is significantly lower than that of macro-expressions.
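The two intensity indicators lend themselves to a short sketch. Assuming a rectified EMG trace for one expression episode and a separately measured peak amplitude from a maximum-voluntary-contraction trial (the function name, parameters, and the 1000 Hz sampling rate below are assumptions, not from the paper), MVC% and iEMG could be computed as:

```python
import numpy as np

def emg_intensity_indicators(emg, mvc_amplitude, fs=1000.0):
    """Illustrative intensity indicators from one EMG episode.

    emg: 1-D array holding the raw EMG signal for one expression episode.
    mvc_amplitude: peak rectified EMG recorded during a maximum voluntary
        contraction trial, used as the normalization reference.
    fs: sampling rate in Hz (1000 Hz is an assumption for this sketch).
    """
    rectified = np.abs(emg)
    apex_value = rectified.max()                      # peak activation in the episode
    mvc_percent = 100.0 * apex_value / mvc_amplitude  # MVC%: apex relative to MVC
    iemg = np.trapz(rectified, dx=1.0 / fs)           # iEMG: area under the curve
    return mvc_percent, iemg
```

Under this formulation, the paper's finding corresponds to micro-expression episodes yielding smaller mvc_percent and iemg values than macro-expression episodes from the same subject.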