Date & time
11 a.m. – 2 p.m.
This event is free
School of Graduate Studies
Engineering, Computer Science and Visual Arts Integrated Complex
1515 Ste-Catherine St. W.
Room 3.309
When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge in the thesis subject, together with the candidate's own contributions to that subject. The distinguishing criterion of doctoral research is a significant and original contribution to knowledge.
Once the thesis is accepted, the candidate presents it orally. This oral exam is open to the public.
Traditional topic modeling techniques, such as Latent Dirichlet Allocation (LDA), often employ standard Dirichlet distributions to model topic-word and document-topic relationships. While effective in certain contexts, these methods face limitations in capturing the complex dependencies, uncertainty, and heterogeneity in real-world data. These limitations become particularly pronounced in multimodal and cross-domain settings, where textual, visual, and spectral signals interact nonlinearly and where distribution shifts across platforms, languages, and domains are pervasive. While deep learning methods have advanced multimodal understanding and classification, many lack principled probabilistic foundations, leading to deterministic latent representations, component collapse, and poor robustness under domain shift.

This dissertation addresses these challenges by introducing a unified probabilistic deep learning framework built upon expressive distributional priors, including the generalized Dirichlet, smoothed Dirichlet, and Beta-Liouville distributions. The proposed framework enables continuous latent representations that capture rich covariance structures, uncertainty, and asymmetry beyond standard Dirichlet and Gaussian assumptions.

Within this paradigm, we develop a series of models spanning topic modeling, multimodal fusion, and cross-domain adaptation. We first propose a Generalized Dirichlet Variational Autoencoder (GD-VAE) for neural topic modeling, followed by smoothed Dirichlet-based multimodal architectures for fake news detection, including SmoothDetector and SD-MoBERT, which integrate probabilistic topic modeling with long-context transformer representations. To address robustness under distribution shift, we further introduce EviDA, an uncertainty-weighted domain adversarial learning framework that leverages evidential deep learning to adaptively modulate instance-level domain alignment in cross-domain and cross-lingual fake news detection.
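The uncertainty weighting idea behind evidential approaches such as EviDA can be illustrated with a small sketch. In standard evidential deep learning, non-negative evidence is derived from network logits (here via a softplus), an evidential Dirichlet is formed as alpha = evidence + 1, and total uncertainty is u = K / sum(alpha). The function names and the specific (1 - u) weighting of the domain-alignment loss below are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def evidential_uncertainty(logits):
    """Dirichlet-evidence uncertainty from evidential deep learning.

    Evidence e = softplus(logits) >= 0 parameterizes a Dirichlet with
    alpha = e + 1; total uncertainty u = K / sum(alpha) equals 1 when
    there is no evidence and shrinks as evidence accumulates.
    """
    evidence = np.logaddexp(0.0, logits)   # softplus, elementwise
    alpha = evidence + 1.0
    K = alpha.shape[-1]
    return K / alpha.sum(axis=-1)

def weighted_domain_loss(domain_losses, logits):
    """Hypothetical instance-level weighting in the spirit of EviDA.

    Confident instances (low u) drive domain alignment; highly
    uncertain instances are damped instead of being force-aligned.
    """
    u = evidential_uncertainty(logits)
    w = 1.0 - u                            # illustrative weighting rule
    return (w * domain_losses).sum() / max(w.sum(), 1e-8)

logits = np.array([[10.0, 0.0, 0.0],       # confident prediction
                   [0.0, 0.0, 0.0]])       # no evidence for any class
u = evidential_uncertainty(logits)
loss = weighted_domain_loss(np.array([0.5, 2.0]), logits)
```

Under this rule, ambiguous samples contribute less to the adversarial alignment objective, so noisy instances cannot force a misleading match between source and target domains.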
Finally, we propose PerLiFuse, a per-frequency Beta-Liouville fusion network operating in the spectral domain, which learns dynamic, example-specific gating across frequency bands to reconcile conflicting multimodal cues and mitigate fusion collapse. Extensive empirical evaluations across multiple benchmark datasets demonstrate consistent improvements in topic coherence, diversity, classification accuracy, robustness to domain shifts, and uncertainty calibration over state-of-the-art baselines. Collectively, this work establishes a principled and extensible paradigm for probabilistic deep learning, enabling interpretable, robust, and scalable models for multimodal understanding and misinformation detection in complex real-world environments.
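As a concrete illustration of why the generalized Dirichlet prior named above is more expressive than the standard Dirichlet, the sketch below draws a sample via its stick-breaking construction from independent Beta variates; giving each component its own Beta(a_i, b_i) fraction is what admits a richer covariance structure than the standard Dirichlet allows. This is a minimal NumPy illustration under that construction, not code from the thesis.

```python
import numpy as np

def sample_generalized_dirichlet(a, b, rng=None):
    """Draw one sample from a generalized Dirichlet distribution.

    A generalized Dirichlet point on the K-simplex is built from K-1
    independent Beta(a_i, b_i) stick-breaking fractions; the standard
    Dirichlet is the special case where these parameters are tied.
    """
    rng = np.random.default_rng(rng)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    v = rng.beta(a, b)                 # independent stick-breaking fractions
    x = np.empty(len(a) + 1)
    remaining = 1.0
    for i, vi in enumerate(v):
        x[i] = vi * remaining          # take a fraction of what is left
        remaining *= 1.0 - vi
    x[-1] = remaining                  # last component absorbs the rest
    return x

# Illustrative parameters: three Beta fractions give a 4-dimensional sample.
theta = sample_generalized_dirichlet([2.0, 3.0, 1.5], [4.0, 2.0, 3.0], rng=0)
```

By construction the components are non-negative and sum to one, so `theta` can play the role of a document-topic vector while still encoding per-component asymmetry.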
© Concordia University