Workshops & seminars

Tackling distribution shift for reliable and data-efficient deep learning

Date & time
Monday, March 11, 2024
10 a.m. – 11:30 a.m.

Changjian Shui


This event is free



ER Building
2155 Guy St.
Room ER-1072

Wheelchair accessible



Modern artificial intelligence uses deep learning to extract useful information from complex, high-dimensional data and has achieved remarkable practical success. However, two significant concerns have recently emerged. (1) Reliability: deep learning models lack robustness and exhibit prediction bias across subpopulations. (2) Data scarcity: training a deep model requires large amounts of data, which is often infeasible in practice. In my talk, I will show the importance of tackling distribution shift to address these concerns. Specifically, I will present several theoretically principled tools: (a) Invariance, learning an invariant representation by eliminating nuisance factors for robustness; (b) Optimization, optimizing over all subgroup distributions equally for fairness; (c) Selection, learning from limited labeled data by selecting useful information from relevant data. I will also explore how these ideas are being applied in real-world settings such as healthcare.
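The "Optimization" idea above, treating all subgroup distributions equally, can be illustrated with a minimal worst-group training loop. This is only a sketch of the general group-robust optimization principle, not the speaker's specific method; the toy data, function names, and hyperparameters are all illustrative assumptions.

```python
# Sketch of group-balanced training (illustrative only): fit a 1-D linear
# predictor by gradient descent on the loss of the currently WORST subgroup,
# so that no subgroup's error is allowed to dominate.

def group_losses(w, groups):
    """Mean squared error of the prediction w * x on each group of (x, y) pairs."""
    return [sum((w * x - y) ** 2 for x, y in g) / len(g) for g in groups]

def fit_worst_group(groups, lr=0.05, steps=200):
    """Gradient descent that, at each step, updates on the worst-off group."""
    w = 0.0
    for _ in range(steps):
        losses = group_losses(w, groups)
        worst = max(range(len(groups)), key=lambda i: losses[i])
        g = groups[worst]
        # Gradient of the worst group's mean squared error w.r.t. w.
        grad = sum(2 * (w * x - y) * x for x, y in g) / len(g)
        w -= lr * grad
    return w
```

For two subgroups generated by y = 2x and y = 3x respectively, ordinary averaging would favor the larger group, while this worst-group objective settles near the slope where both groups' errors balance (about 2.5 here). Practical group-robust methods reweight groups smoothly rather than picking the single worst one, but the balancing principle is the same.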



Changjian Shui is a Postdoctoral Fellow at the Vector Institute. He received his Ph.D. from Université Laval in 2022 and completed postdoctoral research at McGill University/Mila in 2023. He received the diplôme d'ingénieur from Télécom Paris in 2015 and a master's degree in applied mathematics from Université Paris-Saclay in 2016. His research focuses on machine learning under distribution shift and its applications in trustworthy AI, few-data learning, computer vision, and healthcare. His work has appeared in major AI venues such as NeurIPS, ICML, ICLR, JMLR, IJCAI, AAAI, AISTATS, and IEEE TKDE. He has been recognized as an outstanding reviewer at ICLR, ICML, NeurIPS, and TMLR.


© Concordia University