When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the candidate's own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.
Once the thesis is accepted, the candidate defends it in an oral examination that is open to the public.
Abstract
Generative models aim to transform a simple latent distribution into a complex data distribution, enabling the synthesis of high-dimensional, realistic data. Generative model inversion addresses the reverse process, mapping the complex data distribution back to a simple latent representation. In this thesis, we introduce several architecture-agnostic contributions to generative models and their inversion, along with applications of these methods. First, we show that using multiple adversarial losses improves performance and requires fewer hyperparameters than using an auxiliary classifier. Second, we introduce a novel encoder-based GAN inversion method that converges better than a simple mean-squared-error objective by dynamically adjusting the scale of each element of the latent random variable. Third, we propose an out-of-distribution detection method that scores samples by the log probability of the latent vector predicted by this encoder-based GAN inversion framework. Fourth, we combine a perceptual VAE with the GAN inversion technique of the second contribution to further improve GAN inversion performance. Finally, we introduce a novel GAN that performs self-supervised class-conditional data generation and clustering using a classifier gradient-penalty loss.
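To make the out-of-distribution idea concrete, the sketch below scores a sample by the log probability of its inferred latent vector under a standard normal prior: a latent far from the prior mode receives a much lower log probability, flagging the input as likely out-of-distribution. This is only a minimal illustration of the general principle; the `encode` function here is a hypothetical stand-in (an identity map), whereas the thesis uses a trained inversion encoder, and the exact scoring rule used in the thesis may differ.

```python
import numpy as np

def standard_normal_logpdf(z):
    """Log density of latent z under a standard multivariate normal prior."""
    d = z.shape[-1]
    return -0.5 * (d * np.log(2.0 * np.pi) + np.sum(z ** 2, axis=-1))

def ood_score(encode, x):
    """Negative log probability of the inferred latent; higher = more OOD."""
    z = encode(x)
    return -standard_normal_logpdf(z)

# Hypothetical stand-in encoder: identity map. A real system would use the
# trained encoder of the GAN inversion framework.
encode = lambda x: x

in_dist = np.zeros((1, 8))       # latent at the prior mode
far_out = np.full((1, 8), 5.0)   # latent deep in the prior's tail

# The far-out sample gets a strictly higher OOD score.
print(float(ood_score(encode, in_dist)[0]), float(ood_score(encode, far_out)[0]))
```

Any monotone transform of the latent log probability works as a score; thresholding it yields a binary in/out-of-distribution decision.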