Thesis defences

PhD Oral Exam - Mathieu Dugré, Computer Science

Dynamic Mixed Precision in Gradient Descent Optimization: Framework and Evaluation in 3D MRI Registration


Date & time
Monday, June 29, 2026
9 a.m. – 12 p.m.
Cost

This event is free

Organization

School of Graduate Studies

Contact

Dolly Grewal

Where

Online

When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the student’s own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.

Once accepted, the candidate presents the thesis orally. This oral exam is open to the public.

Abstract

Mixed precision, combining low- and high-precision arithmetic, has demonstrated significant promise for accelerating computation in neural network training. However, its adoption remains concentrated in deep learning, with limited investigation into scientific computing domains. A critical gap exists in understanding how mixed precision affects numerical stability and convergence in optimization-based algorithms for scientific applications. This thesis addresses this gap by investigating mixed precision in the context of magnetic resonance imaging registration, an essential and computationally intensive step in neuroimaging pipelines. Safely leveraging mixed precision to accelerate both classical and deep learning-based registration methods without sacrificing accuracy requires a comprehensive analysis of its numerical stability and impact on convergence.

This thesis has three primary aims: (1) characterizing the performance bottlenecks of neuroimaging applications, (2) investigating the benefits of mixed precision for magnetic resonance imaging registration, and (3) developing a general framework for mixed precision in gradient-based optimization. Addressing the first aim, our profiling reveals that linear interpolation and memory throughput are the primary bottlenecks in classical workflows. Notably, this analysis also uncovered a casting bug in the Insight Toolkit (ITK) library that unexpectedly degrades single-precision performance, making it slower than double precision. For the second aim, our in-depth evaluation of ANTs registration demonstrates that the pipeline can be effectively accelerated by leveraging data formats with precision as low as 13 bits. Finally, we introduce DynoMP, a general-purpose framework that dynamically predicts the optimal precision at each step of a gradient-based optimization process. We demonstrate DynoMP's versatility across paradigms ranging from classical iterative registration (ANTs) to deep learning models (MNIST, VoxelMorph). Through mixed-precision simulations using VPREC, we validate its potential for significant computational efficiency gains while preserving baseline accuracy. Ultimately, the framework and findings presented in this thesis establish a foundation for understanding the impact of mixed precision across general gradient-based optimization problems.
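The core idea of dynamically adjusting precision during optimization can be illustrated with a small sketch. This is not DynoMP itself — the switching heuristic, thresholds, and function names below are illustrative assumptions: gradient descent runs its arithmetic in float16 until the per-step loss improvement stalls below a tolerance, then promotes to float64 to finish converging.

```python
import numpy as np

def dynamic_precision_gd(x0, grad, loss, lr=0.1, steps=50, tol=1e-3):
    """Gradient descent that starts in float16 and promotes to float64
    once low-precision progress stalls (illustrative heuristic, not DynoMP)."""
    dtype = np.float16                      # start in cheap, low precision
    x = np.asarray(x0, dtype=dtype)
    prev = loss(np.float64(x))
    history = []                            # precision used at each step
    for _ in range(steps):
        g = grad(x).astype(dtype)           # gradient in current precision
        x = (x - dtype(lr) * g).astype(dtype)
        cur = loss(np.float64(x))           # monitor loss in full precision
        # If low precision stops improving the loss, promote precision.
        if dtype == np.float16 and abs(prev - cur) < tol:
            dtype = np.float64
            x = x.astype(dtype)
        history.append(dtype)
        prev = cur
    return np.float64(x), history

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
f = lambda x: (x - 3.0) ** 2
df = lambda x: 2.0 * (x - 3.0)
x_star, hist = dynamic_precision_gd(0.0, df, f)
```

In this toy run the optimizer spends its early steps in float16, where coarse rounding is harmless because the iterate is far from the minimum, and switches to float64 only once float16's resolution limits further progress — the same intuition as predicting the cheapest safe precision per optimization step.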


© Concordia University