Date & time
12:30 p.m. – 3:30 p.m.
This event is free
School of Graduate Studies
ER Building
2155 Guy St.
Room 1222
When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge on the thesis subject as well as the candidate’s own contributions to it. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.
Once the thesis is accepted, the candidate presents it orally. This oral exam is open to the public.
This thesis explores explainable deep learning methods for the early detection of mental health risks from social media posts. We focus on models that not only detect risk early, but also provide clear post-level explanations that align with their decision process.
First, we examine baseline architectures for user-level early detection of anorexia within the CLEF eRisk framework and report on our system’s performance in the 2019 shared task. Our submission combined attention-based neural sub-models with a final Support Vector Machine (SVM) classifier and ranked first on the official metrics, with an F1 of 0.7073 and a latency-weighted F1 of 0.6908. Analyzing the user-level attention weights as post-level evidence shows that a small subset of posts often drives the final decision.
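The latency-weighted F1 mentioned above rewards correct decisions made after reading fewer posts. A minimal sketch of the standard eRisk-style formulation: F1 is multiplied by a speed factor derived from the median decision delay over true-positive users, where the penalty grows sigmoidally with the number of posts read (the penalty parameter `p` here is an illustrative assumption, not a value taken from the thesis).

```python
import math
from statistics import median

def latency_penalty(k: int, p: float = 0.0078) -> float:
    """Penalty for deciding after reading k posts of a true-positive user.

    Zero when the very first post suffices (k = 1), and approaches 1
    as the decision is delayed further down the timeline.
    """
    return -1.0 + 2.0 / (1.0 + math.exp(-p * (k - 1)))

def latency_weighted_f1(f1: float, delays: list[int], p: float = 0.0078) -> float:
    """Scale F1 by a speed factor based on the median delay over true positives."""
    speed = 1.0 - median(latency_penalty(k, p) for k in delays)
    return f1 * speed
```

With this shape, a system that flags every at-risk user on their first post keeps its full F1, while later decisions are discounted smoothly rather than cut off at a hard deadline.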
Second, we introduce AtteFa, a quantitative metric for attention faithfulness based on a single-run adversarial setup. The method trains an adversarial model to match the base model’s predictions while diverging in attention, allowing us to measure how much attention can change without affecting outputs. Across synthetic and real datasets, AtteFa reveals when attention aligns with the model’s decision process and when it does not.
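The abstract describes AtteFa's adversarial setup only at a high level. The sketch below shows one plausible objective of this kind, in the spirit of common adversarial-attention setups: the adversary is rewarded for matching the base model's predictions (low total variation distance) while pushing its attention distribution away (high Jensen-Shannon divergence). The specific distance choices and the weight `lam` are assumptions for illustration, not AtteFa's exact definition.

```python
import numpy as np

def tvd(p: np.ndarray, q: np.ndarray) -> float:
    """Mean total variation distance between two batches of prediction distributions."""
    return 0.5 * np.abs(p - q).sum(axis=-1).mean()

def jsd(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Mean Jensen-Shannon divergence between two batches of attention distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        return (a * np.log((a + eps) / (b + eps))).sum(axis=-1)
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).mean()

def adversarial_loss(y_base, y_adv, att_base, att_adv, lam: float = 1.0) -> float:
    """Objective for the adversary: keep predictions close while
    making the attention distributions diverge."""
    return tvd(y_adv, y_base) - lam * jsd(att_adv, att_base)
```

If the adversary can drive this loss well below zero, attention can be changed freely without affecting outputs, which is evidence against faithfulness; if it cannot, attention is more tightly coupled to the decision process.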
Finally, we present a user-aware attention framework built on DisorBERT and fine-tuned end-to-end. We pair this model with adversarial training and evaluate faithfulness using AtteFa. On four eRisk tasks (depression, anorexia, self-harm, and pathological gambling), the approach maintains competitive latency-weighted F1 scores while producing transparent post-level explanations. Our analysis further shows that decisions typically rely on a small portion of each user’s timeline, highlighting both the sparsity of strong mental health signals and the value of faithful explanations.
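The DisorBERT-based architecture itself is not detailed in this abstract, but the core idea of user-aware, post-level attention can be illustrated generically: score each post embedding, normalize the scores with a softmax into attention weights, and pool the posts into a single user representation. The linear scoring vector `w` and all names here are illustrative assumptions.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a vector of scores."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(post_embs: np.ndarray, w: np.ndarray):
    """Pool a user's post embeddings into one representation.

    post_embs: (num_posts, dim) array of per-post embeddings.
    w:         (dim,) scoring vector (illustrative stand-in for a learned layer).
    Returns the post-level attention weights and the weighted user representation.
    """
    scores = post_embs @ w        # one scalar relevance score per post
    alpha = softmax(scores)       # attention weights summing to 1
    user_repr = alpha @ post_embs # attention-weighted average of the posts
    return alpha, user_repr
```

The attention weights `alpha` are exactly the post-level explanation discussed above: a sparse `alpha` concentrated on a few posts reflects the finding that decisions typically rely on a small portion of each user's timeline.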
Taken together, these contributions build a step-by-step path: from strong early detection, to a principled measure of attention faithfulness, to a practical, user-aware model whose explanations are both transparent and aligned with its decision process.
© Concordia University