DeepScreen: Boosting Depression Screening Performance with an Auxiliary Task

Year
2023
Type(s)
Conference Paper
Author(s)
Ricardo Flores, Avantika Shrestha, and Elke Rundensteiner
Source
2023 IEEE International Conference on Big Data (BigData)
Url
https://ieeexplore.ieee.org/document/10386595

Depression is a prevalent mental health condition with severe impacts on physical and social health. It is costly and difficult to detect, requiring substantial time from trained mental health professionals. To alleviate this burden, recent research explores the diagnostic capabilities of deep learning models trained on modalities extracted from videos of clinical interviews for depression screening. However, training deep learning models is challenging because datasets in the mental health domain contain only a small number of patients. To address this challenge, we propose DeepScreen, a recurrent deep-learning architecture for depression screening whose performance is boosted by a self-supervised auxiliary task for selective missing value imputation. DeepScreen leverages a multi-task architecture with a bidirectional recurrent deep learning model and a self-attention mechanism, jointly optimizing the supervised depression prediction task and the self-supervised auxiliary task. Our first study assesses the auxiliary task training of DeepScreen under different correlation levels and masked sub-sequence sizes in multivariate time series. Having found this effective, our second study evaluates DeepScreen on 15 datasets composed of real-world temporal facial landmark features extracted from responses to different clinical interview questions. The results across all 15 datasets demonstrate that the imputation task significantly boosts depression prediction metrics; in particular, DeepScreen improves the F1 score on one of the datasets by 57%, and our best-performing model achieves an F1 score of 0.85. This work provides valuable insights into improving deep-learning-driven mental health screening applications by leveraging auxiliary tasks such as imputation to learn better representations even from small datasets.
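
A minimal sketch of the kind of multi-task setup the abstract describes, assuming PyTorch: a bidirectional GRU encoder with self-attention feeds both a depression-classification head and an imputation head that reconstructs masked sub-sequences, and the two losses are optimized jointly. The class name, layer sizes, pooling, mask length, and loss weighting below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical multi-task sketch (not the paper's code): bidirectional GRU encoder
# + self-attention, with a screening head and an auxiliary imputation head.
import torch
import torch.nn as nn

class MultiTaskScreeningModel(nn.Module):  # hypothetical name
    def __init__(self, n_features, hidden=64, n_heads=4):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, n_heads, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, 1)        # depression screening head
        self.imputer = nn.Linear(2 * hidden, n_features)  # auxiliary imputation head

    def forward(self, x):
        h, _ = self.encoder(x)                 # (batch, time, 2 * hidden)
        h, _ = self.attn(h, h, h)              # self-attention over time steps
        logits = self.classifier(h.mean(dim=1)).squeeze(-1)  # pooled -> label logit
        recon = self.imputer(h)                # per-step feature reconstruction
        return logits, recon

def joint_loss(logits, labels, recon, target, mask, alpha=0.5):
    # Supervised screening loss plus self-supervised loss on masked positions only;
    # the weighting alpha is an assumption.
    cls = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    imp = ((recon - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    return cls + alpha * imp

# Toy usage: mask a random contiguous sub-sequence per sample before encoding.
if __name__ == "__main__":
    batch, steps, feats = 8, 50, 136           # e.g. 68 facial landmarks x (x, y); illustrative
    x = torch.randn(batch, steps, feats)
    labels = torch.randint(0, 2, (batch,)).float()
    mask = torch.zeros(batch, steps, 1)
    starts = torch.randint(0, steps - 10, (batch,))
    for i, s in enumerate(starts):
        mask[i, s:s + 10] = 1.0                # masked sub-sequence of length 10
    x_masked = x * (1 - mask)                  # zero out masked values as model input
    model = MultiTaskScreeningModel(feats)
    logits, recon = model(x_masked)
    loss = joint_loss(logits, labels, recon, x, mask)
    loss.backward()
    print(float(loss))
```

In this sketch the model must reconstruct the hidden sub-sequence from its temporal context, which is one plausible way to realize the "selective missing value imputation" auxiliary task described above; the actual masking strategy and architecture details are specified in the paper itself.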