
Doctoral Seminar: Mina Yousefi

March 24, 2016


Speaker: Mina Yousefi

Supervisors: Drs. A. Krzyzak, C. Y. Suen

Supervisory Committee:
Drs. T. D. Bui, N. Kharma, S. P. Mudur

Title: Computer-aided Diagnostic System for Breast Cancer Detection Using Tomosynthesis Images

Date: Thursday, March 24, 2016

Time: 11:40AM

Place: EV 3.309

ABSTRACT

Breast cancer is the most common cancer among women worldwide, and early detection improves the chance of successful treatment. Mammography is the standard approach radiologists use for early detection. In the last decade, digital breast tomosynthesis (DBT) was introduced in the field of breast cancer screening. It produces multiple X-ray images of the breast taken at different angles. These images are then processed to detect cancer or early signs of breast disease, the most common of which are irregularly shaped masses, small calcified dots (microcalcifications), architectural distortion, and bilateral asymmetry, any of which may be missed in standard 2-D mammography. The aim of our research is to introduce an automatic computer-aided diagnosis (CAD) framework that uses a variety of computer vision techniques to detect early signs of breast cancer in digital tomosynthesis images.

Our system consists of three modules, each of which deals with one early sign of breast cancer: masses, microcalcifications, and bilateral asymmetry between the left and right breast. To detect microcalcifications, we use a Laplacian of Gaussian (LoG) filter together with preprocessing and post-analysis steps. For bilateral asymmetry analysis, we apply the thin-plate spline registration algorithm and a set of region comparison metrics. For mass detection, two approaches are applied; both share common preprocessing and segmentation algorithms but use different techniques to classify DBT images. One extracts hand-crafted features and uses them to classify DBT images with a multiple instance learning (MIL) method, while the other applies a deep convolutional network that extracts features automatically. The deep network consists of three convolutional layers, each followed by a rectified linear unit and a max-pooling layer, then two fully connected layers, one softmax layer, and two multiple instance learning layers. We conducted various experiments on our data set to show that our system outperforms conventional, non-automatic frameworks that rely on hand-crafted features to classify medical images.
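As a rough illustration of the microcalcification step, the LoG response of an image can be thresholded to flag small bright spots. The sketch below assumes SciPy; the function name, sigma, threshold, and window size are illustrative choices, not the thesis's actual parameters.

```python
import numpy as np
from scipy import ndimage

def detect_bright_spots(image, sigma=1.5, threshold=0.1):
    """Illustrative sketch: flag small bright, point-like spots via a
    Laplacian-of-Gaussian (LoG) blob response."""
    # LoG responds strongly to small blobs; negate so bright spots
    # produce positive peaks.
    response = -ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # Keep strict local maxima above the threshold as candidates.
    local_max = ndimage.maximum_filter(response, size=5)
    candidates = (response == local_max) & (response > threshold)
    return np.argwhere(candidates)

# Synthetic example: one bright dot on a dark background.
img = np.zeros((64, 64))
img[32, 32] = 10.0
print(detect_bright_spots(img))  # → [[32 32]]
```

In the actual pipeline, such candidate points would then be filtered by the preprocessing and post-analysis steps mentioned in the abstract.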
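The thin-plate spline registration step can be sketched with SciPy's RBF interpolator, which supports a thin-plate-spline kernel. The landmark coordinates below are made up for illustration and are not from the thesis.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical matched landmarks: points on one breast and the
# corresponding points on the (mirrored) contralateral breast.
src = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([[0.05, 0.0], [0.0, 0.02], [-0.03, 0.01],
                      [0.02, -0.02], [0.0, 0.04]])

# Thin-plate spline warp carrying src landmarks onto dst; with the
# default smoothing of 0 it interpolates the landmarks exactly.
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# The fitted warp reproduces the destination landmarks.
print(np.allclose(tps(src), dst))  # → True
```

Once the two sides are registered, the region comparison metrics mentioned in the abstract can be computed on corresponding regions.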
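The convolutional network described above can be sketched in PyTorch. The channel counts, the 64x64 input size, and the two-class output are assumptions for illustration, and the two multiple-instance-learning layers are omitted here, so this is not the thesis architecture.

```python
import torch
import torch.nn as nn

# Sketch of the described stack: three convolutional layers, each
# followed by a ReLU and max pooling, then two fully connected layers
# and a softmax. Layer sizes are illustrative assumptions.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),   # 64x64 input → 8x8 after 3 pools
    nn.Linear(128, 2),                       # e.g. benign vs. malignant
    nn.Softmax(dim=1),
)

x = torch.randn(4, 1, 64, 64)   # a batch of four single-channel patches
probs = model(x)
print(probs.shape)              # → torch.Size([4, 2])
```

A MIL head would replace the per-patch softmax with a layer that aggregates patch scores into a single image-level decision.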





© Concordia University