Multi-Task Learning Using Facial Features for Mental Health Screening

Year
2023
Type(s)
Conference Paper
Author(s)
Ricardo Flores, Avantika Shrestha, ML Tlachac, and Elke Rundensteiner
Source
2023 IEEE International Conference on Big Data (BigData)
Url
https://ieeexplore.ieee.org/document/10386191

Major depressive disorder (MDD) and post-traumatic stress disorder (PTSD) are prevalent mental health conditions with severe physical and social impacts. These conditions are costly, and their detection is difficult, requiring substantial time from trained mental health professionals. To alleviate this issue, recent studies have explored the diagnostic potential of deep learning models trained on modalities extracted from clinical interview videos conducted by a virtual agent. However, such deep learning models are challenging to train because of the long sequences and small numbers of participants common in mental health datasets. To address these challenges, we leverage multi-task learning, using temporal facial features as input, to screen for MDD and PTSD. The multi-task architecture is based on a bidirectional GRU model with self-attention. We evaluate our multi-task model on temporal facial features extracted from responses to 15 clinical interview questions conducted by a virtual agent. The results suggest that multi-task learning improves generalization performance compared to single-task learning. For MDD screening, multi-task learning improved the balanced accuracy over single-task learning on 11 of the 15 datasets. In fact, our multi-task learning model increased MDD screening ability by 25% to a balanced accuracy of 0.87 in some scenarios. This work provides valuable findings for the future of mental health screening applications leveraging temporal facial features.
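
The sketch below is not the authors' released code; it is a minimal PyTorch illustration of the kind of architecture the abstract describes: a bidirectional GRU over a sequence of per-frame facial features, additive self-attention pooling, and two task-specific heads (MDD and PTSD screening) on a shared representation. The class name, feature dimensionality, layer sizes, and loss setup are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MultiTaskBiGRU(nn.Module):
    """Hypothetical multi-task BiGRU with self-attention pooling."""

    def __init__(self, n_features: int = 49, hidden: int = 64):
        super().__init__()
        # Bidirectional GRU encodes the sequence of per-frame facial features.
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        # Additive self-attention scores each time step for weighted pooling.
        self.attn = nn.Sequential(
            nn.Linear(2 * hidden, 2 * hidden),
            nn.Tanh(),
            nn.Linear(2 * hidden, 1),
        )
        # Shared representation feeds two task-specific binary screening heads.
        self.mdd_head = nn.Linear(2 * hidden, 1)
        self.ptsd_head = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        # x: (batch, time, n_features) temporal facial features for one response
        h, _ = self.gru(x)                            # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time steps
        pooled = (weights * h).sum(dim=1)             # attention-weighted summary
        return self.mdd_head(pooled), self.ptsd_head(pooled)


# Toy usage: 8 responses, 300 frames each, 49 facial features per frame.
model = MultiTaskBiGRU()
mdd_logits, ptsd_logits = model(torch.randn(8, 300, 49))

# Multi-task training sums the per-task losses so the shared encoder
# learns from both MDD and PTSD labels.
criterion = nn.BCEWithLogitsLoss()
y_mdd = torch.randint(0, 2, (8, 1)).float()
y_ptsd = torch.randint(0, 2, (8, 1)).float()
loss = criterion(mdd_logits, y_mdd) + criterion(ptsd_logits, y_ptsd)
loss.backward()
```

Summing the two task losses is the simplest multi-task setup; the paper's actual training procedure, loss weighting, and feature dimensionality may differ from this sketch.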