Date & time
10 a.m. – 1 p.m.
This event is free
School of Graduate Studies
ER Building
2155 Guy St.
Room 1202
When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge in the thesis subject, together with the candidate's own contributions to that subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.
Once the thesis is accepted, the candidate presents it orally. This oral exam is open to the public.
Despite rapid commercialization, safety assurance for multi-module Autonomous Driving Systems (ADSs) is hindered by a fractured testing landscape. Current validation methods suffer from a fundamental divide: component- and module-level tests ignore inter-module error propagation, while system-level tests rely on black-box assumptions that obscure internal logic. Neither approach can therefore systematically exploit internal weaknesses, and the existing literature fails to rigorously evaluate how component-level vulnerabilities cascade into system-wide catastrophes.
To bridge this gap, this thesis proposes a comprehensive, multi-level fuzzing framework designed to rigorously evaluate the end-to-end robustness and safety of ADSs. The research begins by evaluating perception integrity at the scene level, introducing a perception-guided fuzzing framework that leverages high-fidelity simulation to mutate driving scenarios. Our evaluations revealed the perception module's sensitivity to subtle, unanticipated environmental noise, as well as critical processing delays that severely degraded perception quality.
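As a rough illustration of the scene-level, perception-guided fuzzing loop described in the abstract, the sketch below mutates high-level scenario parameters and greedily keeps mutants that degrade a perception-quality score. Every name here (Scenario, perception_score, mutate, fuzz) and the analytic stand-in for the simulator-plus-perception pipeline are hypothetical, chosen only to make the loop self-contained and runnable; the actual framework drives a high-fidelity simulator and a real perception module.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    fog_density: float     # 0..1
    rain_intensity: float  # 0..1
    sun_altitude: float    # degrees above the horizon

def perception_score(s: Scenario) -> float:
    """Placeholder for: render the scene in a simulator, run the perception
    module, and return a quality metric (e.g., mean detection IoU).
    A toy analytic stand-in is used so this sketch runs end to end."""
    return max(0.0, 1.0 - 0.6 * s.fog_density
                        - 0.3 * s.rain_intensity
                        - 0.02 * max(0.0, -s.sun_altitude))

def mutate(s: Scenario, step: float = 0.1) -> Scenario:
    """Perturb one randomly chosen scene parameter within its valid range."""
    field = random.choice(["fog_density", "rain_intensity", "sun_altitude"])
    scale = 10.0 if field == "sun_altitude" else 1.0
    lo, hi = (-10.0, 90.0) if field == "sun_altitude" else (0.0, 1.0)
    value = getattr(s, field) + random.uniform(-step, step) * scale
    return replace(s, **{field: min(hi, max(lo, value))})

def fuzz(seed: Scenario, budget: int = 200, threshold: float = 0.5):
    """Greedy perception-guided search: keep the mutant that degrades
    perception most; record scenarios whose score drops below `threshold`."""
    best, failures = seed, []
    for _ in range(budget):
        cand = mutate(best)
        if perception_score(cand) < perception_score(best):
            best = cand                 # steer the search toward weak spots
        if perception_score(cand) < threshold:
            failures.append(cand)       # candidate perception failure
    return best, failures

if __name__ == "__main__":
    worst, fails = fuzz(Scenario(0.1, 0.0, 45.0))
    print(f"worst scenario: {worst}; failing scenarios found: {len(fails)}")
```

The greedy feedback step is what makes the fuzzing "perception-guided" rather than purely random: each accepted mutant moves the search toward scene configurations the perception module handles worst.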
To systematically investigate these phenomena, we developed two subsequent frameworks targeting perception robustness against noise and latency. First, to address data fidelity, we designed a framework that injects specification-based and empirically derived noise into sensor inputs, showing that physically plausible perturbations induce non-trivial detection failures. Second, to address computational availability, we introduced a performance-testing framework that generates latency-increasing perturbations. Our extensive evaluations demonstrate that both data-level noise and computational latency propagate downstream to cause system-level failures. Ultimately, the proposed multi-level frameworks provide a systematic approach to evaluating ADS robustness, exposing critical vulnerabilities that threaten real-world deployments and offering insights for the future of autonomous software testing.
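The noise-injection idea can be pictured with a short sketch along these lines: specification-bounded Gaussian range error plus empirically motivated point dropout, applied to LiDAR returns before they reach the detector. The function name and the sigma_spec / dropout_rate values are illustrative assumptions, not figures from the thesis; a latency-oriented variant would reuse the same fuzzing loop but score candidates by measured inference time rather than detection quality.

```python
import numpy as np

def inject_lidar_noise(ranges, sigma_spec=0.03, dropout_rate=0.01, rng=None):
    """Apply noise to a 1-D array of LiDAR range readings (metres).

    sigma_spec   -- Gaussian range error bounded by a datasheet-style
                    accuracy spec (e.g., +/- 3 cm); illustrative value.
    dropout_rate -- fraction of returns lost, as might be estimated
                    empirically from recorded drives; illustrative value.
    """
    rng = rng or np.random.default_rng()
    noisy = np.clip(ranges + rng.normal(0.0, sigma_spec, size=ranges.shape),
                    0.0, None)
    keep = rng.random(ranges.shape) >= dropout_rate
    return np.where(keep, noisy, np.nan)   # NaN marks a lost return

if __name__ == "__main__":
    clean = np.full(1000, 20.0)             # synthetic returns at 20 m
    noisy = inject_lidar_noise(clean)
    print(f"mean range error: {np.nanmean(np.abs(noisy - clean)):.3f} m, "
          f"dropped returns: {int(np.isnan(noisy).sum())}")
```

Keeping the perturbations inside a published sensor specification is what makes a resulting detection failure physically plausible rather than an artifact of unrealistic corruption.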
© Concordia University