When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the student’s own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.
Once the thesis is accepted, the candidate presents it orally. This oral exam is open to the public.
Abstract
Sparse recovery is a clear manifestation of Occam's razor in the mathematics of data thanks to its ability to favor the simplest explanation. A prominent example is the reconstruction of a sparse vector (i.e., one with mostly zero or negligible coefficients) from linear measurements, possibly corrupted by noise. This problem arises in numerous applications across various domains, including medical imaging and high-dimensional function approximation. Greedy sparse recovery algorithms approach this problem by iteratively solving local optimization problems and have proven to be efficient alternatives to convex methods.
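To fix ideas, the classical OMP iteration alternates a greedy column selection with a least-squares update on the current support. The following minimal NumPy sketch (illustrative only, not the exact variant studied in the thesis) assumes approximately unit-norm columns of the measurement matrix A:

```python
import numpy as np

def omp(A, y, k):
    """Minimal Orthogonal Matching Pursuit: recover a k-sparse x from y ~ A x.

    Illustrative sketch assuming (approximately) unit-norm columns of A.
    """
    m, n = A.shape
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Greedy step: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Projection step: least-squares fit on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x
```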
In this dissertation, we first develop weighted generalizations of Orthogonal Matching Pursuit (OMP), one of the most popular greedy sparse recovery algorithms, based on two distinct weighting strategies aimed at incorporating a priori knowledge about the signal structure. These generalizations feature explicit and theoretically justified greedy index selection rules. The first strategy establishes a novel connection between greedy sparse recovery and convex relaxation methods, particularly the LASSO family, resulting in new OMP-based algorithms suited to a wide range of loss functions. Numerical results show that these greedy variants inherit key traits from their parent convex decoders. For the second strategy, we provide optimal recovery guarantees for rescaling-based weighted OMP under the weighted Restricted Isometry Property (wRIP), extending the classical RIP-based analysis to the weighted setting.
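As a rough illustration of a rescaling-based weighting strategy, the greedy selection step can rescale the correlations by a vector of positive weights encoding prior support information (here, smaller weights make an index easier to select). This is only a hypothetical sketch; the explicit selection rules derived in the thesis may differ:

```python
import numpy as np

def weighted_selection(A, residual, w):
    """One possible rescaling-based greedy selection step (illustrative).

    w: vector of positive weights expressing prior knowledge about the
    support; indices believed to be active get smaller weights and are
    therefore favored by the rescaled correlation scores.
    """
    scores = np.abs(A.T @ residual) / w
    return int(np.argmax(scores))
```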
The second part of the thesis addresses the non-differentiability of greedy algorithms caused by the (arg)sorting operator by employing "softsorting", enabling differentiable, neural-network-compatible versions of these algorithms. We provide a rigorous theoretical analysis and numerically demonstrate that these "soft" algorithms reliably approximate their original counterparts. We then link our weighted greedy solvers to neural architectures by embedding their iterations into neural networks using the "algorithm unrolling" paradigm, treating weights as meaningful trainable parameters. This approach not only advances the development of data-driven methods for sparse recovery, but also marks a significant step toward building safer and more trustworthy neural networks. Finally, we further extend this framework to deterministic compressed sensing and to learning additional parameters beyond weights.
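A common way to obtain such a differentiable surrogate is to relax the hard argmax into a temperature-controlled softmax. The sketch below is a simplified stand-in for the softsorting operator, which relaxes the full sorting operation rather than a single argmax:

```python
import numpy as np

def soft_selection(A, residual, tau=0.1):
    """Differentiable surrogate for the greedy argmax step (illustrative).

    Returns a softmax distribution over columns instead of a hard one-hot
    index; as tau -> 0 the distribution concentrates on the usual OMP
    choice, while positive tau keeps the step gradient-friendly.
    """
    scores = np.abs(A.T @ residual) / tau
    scores -= scores.max()          # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()              # soft, differentiable selection weights
```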