Thesis defences

PhD Oral Exam - Emimal Jabason, Electrical and Computer Engineering

Neuroimaging Fusion in Nonsubsampled Shearlet Domain by Maximizing the High-Frequency Subband Energy and Classification of Alzheimer's Disease using Local and Global Contextual CNN Features of Neuroimaging Data


Date & time
Thursday, April 25, 2024
10 a.m. – 1 p.m.
Cost

This event is free

Organization

School of Graduate Studies

Contact

Nadeem Butt

Where

Engineering, Computer Science and Visual Arts Integrated Complex
1515 St. Catherine W.
Room 002.301

Wheelchair accessible

Yes

When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the student’s own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.

Once accepted, the candidate presents the thesis orally. This oral exam is open to the public.

Abstract

Neuroimaging techniques have revolutionized the understanding and diagnosis of neurodegenerative diseases such as Alzheimer’s disease by providing information about structural and functional changes in the brain. Structural changes refer to alterations in the anatomy of the brain, such as shrinkage of the hippocampus, enlarged ventricles, and variations in cortical thickness, whereas functional changes refer to alterations in brain function, such as the metabolic activity of neurons, local changes in blood flow, and the regional composition of the brain. The imaging techniques used to capture these two types of changes differ: structural changes can be captured by structural MRI and CT, while functional changes can be captured by functional MRI and PET. Interpreting and correlating information obtained from multiple imaging techniques, however, can be challenging. The desire to have complementary information in a single image has led to investigations into multimodal image fusion. Transform domain approaches with multiscale and multidirectional properties have proven more effective in handling multimodal image fusion problems because of their improved data representation, energy compaction, and reduced complexity. However, the prior distributions used to model the transform domain coefficients do not adequately represent the actual distribution of the coefficients of neuroimaging data. Moreover, in conventional schemes, each transform domain coefficient of the fused image is selected as the corresponding coefficient, among the transform domain representations of the constituent images, that has the largest magnitude; this approach leads to a fused image that may not necessarily have the highest possible energy.
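For comparison, the following is a minimal sketch of the conventional maximum-magnitude selection rule described above, not the method proposed in the thesis; the array names, sizes, and the use of NumPy are assumptions made purely for illustration.

import numpy as np

def max_magnitude_fusion(coeffs_a, coeffs_b):
    # Conventional rule: at each position, keep the coefficient with the
    # larger absolute value from the two source subbands.
    keep_a = np.abs(coeffs_a) >= np.abs(coeffs_b)
    return np.where(keep_a, coeffs_a, coeffs_b)

# Hypothetical high-frequency subbands of equal size from two modalities
subband_mri = np.random.randn(256, 256)
subband_pet = np.random.randn(256, 256)
fused_subband = max_magnitude_fusion(subband_mri, subband_pet)

A fused image built this way maximizes the per-coefficient magnitude, but, as noted above, it does not necessarily maximize the overall subband energy.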

The most common neurodegenerative disease is Alzheimer’s disease, a progressive brain disorder affecting millions of people around the world. There is currently no cure for this disease; however, detecting and classifying it are important for patient care. Recent advances in convolutional neural networks (CNNs) have benefited the classification of Alzheimer’s disease because of their automatic feature generation capability. However, most existing CNN-based methods disregard the local features of the brain data, which leads to a loss of subtle, fine-grained features in the brain imaging data. Moreover, the existing CNN architectures, which rely mainly on global features, pay little attention to the discriminability of the extracted features for the task of classifying Alzheimer’s disease, and they often end up using a large number of parameters to enhance the richness of the extracted features.

The objective of this thesis is twofold: first, to investigate the suitability of a transform for optimally representing neuroimaging data, to study the statistical properties of its coefficients, and to develop a statistically driven approach for the fusion of multimodal neuroimaging data; and second, to design a lightweight deep CNN capable of extracting both local and global contextual features for improved classification performance.

In the first part of the thesis, we develop a novel multimodal fusion algorithm based on the statistical properties of nonsubsampled shearlet transform (NSST) coefficients and a novel energy maximization fusion rule. The marginal distributions of the high-frequency NSST coefficients exhibit heavier tails than the Gaussian distribution. Consequently, we use a heavy-tailed probability density function to describe the highly non-Gaussian statistics of the empirical NSST coefficients, determining its parameters by maximum likelihood estimation. This model is then employed to develop a maximum a posteriori estimator that yields noise-free coefficients. Next, a novel fusion rule that obtains the fused NSST coefficients by maximizing the energy in the high-frequency subbands is proposed.
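As a purely illustrative sketch of this kind of statistical formulation (the specific heavy-tailed prior and noise model used in the thesis are not stated in this abstract), let $y = x + n$ denote a noisy high-frequency NSST coefficient $y$ obtained from a noise-free coefficient $x$ corrupted by noise $n$. The maximum a posteriori estimate then has the generic form

\hat{x} = \arg\max_{x} \left[ \log p(y \mid x) + \log p(x) \right],

where $p(x)$ is the heavy-tailed prior whose parameters are fitted to the empirical coefficients by maximum likelihood. Under the energy maximization idea, the fused coefficients $c_F(k)$ of a high-frequency subband are chosen so that the subband energy, $\sum_{k} |c_F(k)|^2$, is as large as possible, rather than simply taking the larger-magnitude coefficient at each position.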

In the second part of the thesis, a novel lightweight deep CNN that extracts local and global contextual features and uses them for the classification of Alzheimer’s disease is proposed. The main idea behind the proposed network is to process the local and global features separately, using modules designed specifically to extract the local and global contextual features pertinent to the classification of Alzheimer’s disease. Finally, the impact of the fused images, obtained using the fusion scheme proposed in the first part, on the classification accuracy of Alzheimer’s disease is investigated.
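The following is a minimal, purely illustrative two-branch sketch in PyTorch, assuming single-channel 2-D slices as input; the layer choices, channel counts, dilation rates, and number of classes are all assumptions and do not reflect the architecture proposed in the thesis.

import torch
import torch.nn as nn

class DualBranchCNN(nn.Module):
    # Illustrative two-branch design: a local branch with small-kernel
    # convolutions for fine-grained features and a global branch with
    # dilated convolutions for a larger receptive field (contextual features).
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        self.local_branch = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.global_branch = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        local_feat = self.local_branch(x).flatten(1)    # (N, 32)
        global_feat = self.global_branch(x).flatten(1)  # (N, 32)
        return self.classifier(torch.cat([local_feat, global_feat], dim=1))

# Usage on a hypothetical batch of 4 single-channel 128x128 slices
model = DualBranchCNN(in_channels=1, num_classes=3)
logits = model(torch.randn(4, 1, 128, 128))

Concatenating the pooled outputs of the two branches before a single small classifier keeps the parameter count low while still exposing both fine-grained and contextual information to the final decision.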

Extensive experiments are carried out to validate the effectiveness of the various ideas and strategies used in this thesis to develop the schemes for multimodal neuroimaging fusion and classification of Alzheimer’s disease.
