Date & time
1 p.m. – 4 p.m.
This event is free
School of Graduate Studies
Engineering, Computer Science and Visual Arts Integrated Complex
1515 Ste-Catherine St. W.
Room 3.309
When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge in the thesis subject, together with their own contributions to that subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.
Once the thesis is accepted, the candidate presents it orally. This oral exam is open to the public.
This dissertation presents four main contributions to the cyber-security of autonomous vehicles against adversarial artificial intelligence attacks.
The first contribution introduces a vision-based covert attack framework that uses Generative Adversarial Networks (GANs) to manipulate lane-keeping systems through synthetic top-view image generation. The attack employs an encoder, shared layers, and decoder architecture with geometric conditioning mechanisms to transform authentic camera imagery showing lateral deviation into falsified images that appear centered in the lane, deceiving the perception system. Evaluated within Unreal Engine simulation environments, the framework demonstrates that generative models can create photorealistic synthetic road scenes of sufficient quality to evade human observation while successfully manipulating autonomous vehicle trajectories.
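As a rough illustration of the kind of conditional generator described above, the sketch below (in PyTorch) wires an encoder, shared layers, and a decoder, with a lateral-offset condition injected into the shared representation. All class names, layer sizes, and the conditioning scheme are illustrative assumptions, not the dissertation's actual architecture.

```python
# Illustrative sketch only: a conditional image-to-image generator with an
# encoder, shared layers, and decoder, conditioned on a desired lateral offset.
import torch
import torch.nn as nn

class ConditionalTopViewGenerator(nn.Module):
    def __init__(self, cond_dim: int = 1, base_ch: int = 64):
        super().__init__()
        # Encoder: compress the authentic top-view image into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, base_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Project the geometric condition (e.g. target lateral offset in metres)
        # to a per-channel bias added to the shared representation.
        self.cond_proj = nn.Linear(cond_dim, base_ch * 2)
        # Shared layers operating on the conditioned features.
        self.shared = nn.Sequential(
            nn.Conv2d(base_ch * 2, base_ch * 2, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: reconstruct a falsified, centered-looking top view.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_ch * 2, base_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, lateral_offset: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(image)
        bias = self.cond_proj(lateral_offset).unsqueeze(-1).unsqueeze(-1)
        return self.decoder(self.shared(feats + bias))

# Usage: falsify a batch of 128x128 top-view frames to appear centered (offset 0).
fake = ConditionalTopViewGenerator()(torch.randn(2, 3, 128, 128), torch.zeros(2, 1))
```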
The second contribution advances attack sophistication through diffusion-based synthesis, addressing limitations of the GAN approach. The generator employs latent diffusion models adapted for automotive view synthesis, with explicit geometric control through cross-attention, adaptive group normalization, and feature-wise linear modulation (FiLM) layers. A critical component is the creation of the Montreal Urban Driving Dataset, comprising paired authentic road-scene images collected via custom dual-camera hardware across diverse environmental conditions. This dataset bridges the simulation-to-real domain gap identified in the first contribution.
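Of the conditioning mechanisms named above, feature-wise linear modulation is the simplest to sketch: a condition vector predicts a per-channel scale and shift applied to intermediate feature maps of the denoising network. The snippet below is a generic FiLM layer written under that assumption; it is not the dissertation's implementation, and the example condition dimensions are hypothetical.

```python
# Generic FiLM layer: a condition vector modulates feature maps channel-wise.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    def __init__(self, cond_dim: int, num_channels: int):
        super().__init__()
        # One linear layer predicts both scale (gamma) and shift (beta).
        self.to_scale_shift = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, features: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.to_scale_shift(cond).chunk(2, dim=-1)
        # Broadcast (B, C) -> (B, C, 1, 1) over the spatial dimensions.
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * features + beta

# Example: condition a 64-channel feature map on a 4-dim geometric vector
# (e.g. lateral offset, heading, curvature, camera height -- all assumed).
film = FiLM(cond_dim=4, num_channels=64)
out = film(torch.randn(2, 64, 32, 32), torch.randn(2, 4))
```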
The third contribution develops coordinated hybrid attacks targeting multiple sensor modalities simultaneously. VideoDiff-VCA leverages video diffusion models with temporal attention mechanisms and rolling-shutter consistency losses to generate temporally coherent synthetic video sequences, while GPS Spoof-Net produces Doppler-consistent satellite signals through a software-defined radio. The coordination mechanism ensures spatial and temporal consistency between the falsified vision and navigation data, defeating sensor-fusion defenses that successfully detected single-modality attacks.
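The coordination idea can be illustrated with a toy sketch: both attack channels are driven from a single spoofed trajectory, so the falsified camera view and the spoofed GPS fixes describe the same motion. The function below assumes a straight road segment and simple planar geometry; its name, signature, and sign conventions are hypothetical, not taken from the dissertation.

```python
# Toy coordination sketch: one spoofed trajectory drives both attack channels.
import numpy as np

def coordinate_attack(true_lateral, spoofed_lateral, lane_heading_deg, speed_mps, dt):
    """Derive per-frame targets for the vision and GPS channels from one plan.

    true_lateral / spoofed_lateral: arrays of actual vs. claimed lateral offset (m).
    Returns the image-side correction each frame must hide, plus the positions
    and Doppler-consistent velocities the GPS spoofer must emit.
    """
    true_lateral = np.asarray(true_lateral, dtype=float)
    spoofed_lateral = np.asarray(spoofed_lateral, dtype=float)

    # Vision channel: the synthetic frame must hide this much deviation.
    vision_correction = true_lateral - spoofed_lateral

    # GPS channel: integrate speed along the lane heading to get positions,
    # and report the velocity implied by the spoofed (not the true) trajectory,
    # so Doppler residual checks stay consistent with the falsified vision.
    heading = np.deg2rad(lane_heading_deg)
    t = np.arange(len(true_lateral)) * dt
    along = speed_mps * t
    east = along * np.sin(heading) + spoofed_lateral * np.cos(heading)
    north = along * np.cos(heading) - spoofed_lateral * np.sin(heading)
    lateral_rate = np.gradient(spoofed_lateral, dt)
    vel_east = speed_mps * np.sin(heading) + lateral_rate * np.cos(heading)
    vel_north = speed_mps * np.cos(heading) - lateral_rate * np.sin(heading)
    return vision_correction, np.stack([east, north], 1), np.stack([vel_east, vel_north], 1)

# Example: the vehicle drifts 0 -> 1 m laterally while both channels claim it stays centered.
corr, pos, vel = coordinate_attack(np.linspace(0, 1, 10), np.zeros(10), 0.0, 15.0, 0.1)
```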
The fourth contribution presents a unified multi-modal forensic detection framework integrating complementary features across the vision, navigation, and control modalities. The system extracts photo-response non-uniformity (PRNU) patterns, rolling-shutter geometry, temporal consistency metrics, Doppler residuals, position-velocity consistency, and actuator-observation relationships. Statistical fusion through Mahalanobis distance with adaptive temporal filtering provides robust attack detection while maintaining real-time computational feasibility on automotive-grade hardware.
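A minimal sketch of the statistical fusion step might look as follows: per-frame residual features from the different modalities are scored by their squared Mahalanobis distance from a nominal (attack-free) distribution, smoothed with an exponentially weighted filter, and compared against an alarm threshold. The feature choice, smoothing rule, and threshold here are illustrative assumptions, not the dissertation's parameters.

```python
# Illustrative fusion detector: Mahalanobis scoring plus exponential smoothing.
import numpy as np

class MahalanobisFusionDetector:
    def __init__(self, nominal_features: np.ndarray, alpha: float = 0.2, threshold: float = 9.0):
        # Fit mean and covariance of the per-frame feature vector
        # (e.g. PRNU correlation, rolling-shutter residual, Doppler residual,
        # position-velocity mismatch, actuator-observation error) on clean data.
        self.mean = nominal_features.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(nominal_features, rowvar=False))
        self.alpha = alpha          # weight of the newest sample in the filter
        self.threshold = threshold  # squared-distance alarm level
        self.smoothed = 0.0

    def step(self, features: np.ndarray) -> bool:
        """Return True if the filtered anomaly score crosses the alarm level."""
        delta = features - self.mean
        d2 = float(delta @ self.cov_inv @ delta)   # squared Mahalanobis distance
        self.smoothed = self.alpha * d2 + (1 - self.alpha) * self.smoothed
        return self.smoothed > self.threshold

# Example: calibrate on clean frames, then score an incoming feature vector.
rng = np.random.default_rng(0)
detector = MahalanobisFusionDetector(rng.normal(size=(500, 6)))
alarm = detector.step(rng.normal(size=6) + 4.0)   # a strongly anomalous frame
```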
The dissertation concludes with an analysis of the limitations of top-view representations, simulation-to-real transfer challenges, and future research directions, including federated learning security, self-healing adaptive defenses, and hardware-in-the-loop validation.