Aortic dissection and rupture are life-threatening complications that occur when the integrity of the aortic tissue is compromised. Once an aortic dissection takes place, an estimated 41% of patients die before reaching a hospital. Unfortunately, the diagnostic outlook is not much brighter: an estimated 40% of patients presenting with aortic dissection do not meet the current diagnostic criteria.
This thesis aims to assess the risk of dissection and rupture of thoracic aortic aneurysms from patients' echocardiograms. To do so, we study the effects of spatial and temporal learning of the heart's motion in the echocardiograms. We first investigate purely visual learning from still 2D frames extracted from the echocardiogram sequence. We then assess temporal learning across frames, both by applying 3D convolutions over the whole sequence and by aggregating the visually learned content of each frame across the sequence length. We also experiment with a visual attention mechanism to filter the visual context. Finally, we study the effect of adding a tabular-data learning stream to our architecture, which learns from the patient's tabular information and incorporates it into the best-performing model. The results of this thesis, although not conclusive, suggest that temporal dependencies are present between echocardiogram frames throughout the video, which points to the diagnostic importance of analyzing the motion of the beating heart tissue through time.
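The distinction drawn above between per-frame (spatial) learning and sequence-level (temporal) learning can be sketched with a toy example. The functions below are hand-written stand-ins, not the thesis models: `frame_score` plays the role of a 2D network scoring a single still frame, while `temporal_aggregation` is a crude proxy for learning across frames (as 3D convolutions or sequence aggregation would), in that its output depends on frame-to-frame change rather than on any frame alone. All names and numbers are illustrative assumptions.

```python
from statistics import mean

def frame_score(frame):
    """Stand-in for a 2D model's risk score on one still frame."""
    return sum(frame) / len(frame)

def spatial_only(clip):
    """Score each frame independently, then average the scores:
    no information flows between frames."""
    return mean(frame_score(f) for f in clip)

def temporal_aggregation(clip):
    """Aggregate across the sequence so that frame-to-frame motion
    can influence the output; here, the mean absolute change in
    score between consecutive frames (a crude proxy for learned
    temporal dependencies)."""
    diffs = [abs(frame_score(a) - frame_score(b))
             for a, b in zip(clip, clip[1:])]
    return mean(diffs)

# A tiny "echocardiogram": 3 frames of 2 pixel intensities each.
clip = [[0.1, 0.2], [0.4, 0.5], [0.2, 0.3]]
print(round(spatial_only(clip), 3))          # → 0.283
print(round(temporal_aggregation(clip), 3))  # → 0.25
```

The point of the contrast is that `spatial_only` returns the same value for any reordering of the frames, whereas `temporal_aggregation` does not; a model with temporal learning can therefore exploit the motion of the heart tissue, which a purely spatial model cannot.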