
IMPROVING THE CLASSIFICATION ACCURACY OF COMPUTER AIDED DIAGNOSIS THROUGH MULTIMODALITY BREAST IMAGING

Nikolaos Pagonis1, Dionisis Cavouras2, Kostas Sidiropoulos3, George Sakellaropoulos1 and George Nikiforidis1

1 Medical Image Processing and Analysis Group (MIPA), Laboratory of Medical Physics, School of Medicine, University of Patras, 26500 Rio, Greece

2 Medical Image and Signal Processing Laboratory, Department of Medical Instrumentation Technology, Technological Educational Institute of Athens, 12210 Athens, Greece

3 School of Engineering and Design, Brunel University West London, Uxbridge, Middlesex, UB8 3PH, UK, e-mail: Konstantinos.Sidiropoulos@brunel.ac.uk

e-mail: cavouras@teiath.gr, web page: http://medisp.bme.teiath.gr

 

Abstract. The purpose of the present study is to evaluate the effect of using multiple imaging modalities on the classification accuracy achieved by a computer-aided diagnosis system designed for the detection of breast cancer. Towards this aim, 41 cases of breast lesions were selected, 18 of which were diagnosed as malignant and 23 as benign by an experienced physician. Each case included images acquired by means of two imaging modalities: x-ray and ultrasound (US). Manual segmentation was performed on every image in order to delineate and extract the regions of interest (ROIs) containing the breast tumors. Then 104 textural features were extracted per case: 52 from the x-ray image and 52 from the US image. A classification system was designed using the features extracted from every case. Firstly, the system was evaluated using features extracted from the x-ray images alone. The same task was then performed using features extracted from the US images alone. Lastly, the system was evaluated using the combination of the two aforementioned feature sets. The proposed system, which employed the Probabilistic Neural Network (PNN) classifier, scored 78.05% in classification accuracy using only x-ray features. Classification accuracy increased to 82.95% using only US features, while a significant increase in accuracy (95.12%) was achieved by combining features from both x-ray and US. In order to minimize total training time, the proposed system adopted the client-server model to distribute processing tasks over a group of computers interconnected via a local area network. An approximately 4-fold reduction in training time was achieved when 7 clients were employed.
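As a rough illustration of the classification step summarized above, the following Python sketch implements a Gaussian-kernel Probabilistic Neural Network and a leave-one-out evaluation over concatenated x-ray and US feature vectors. The random stand-in feature matrices, the kernel width sigma, and the leave-one-out protocol are illustrative assumptions only, not the study's actual data or training procedure.

```python
import numpy as np

def pnn_classify(train_X, train_y, test_x, sigma=0.5):
    """Probabilistic Neural Network (Parzen-window) classifier.

    Sums Gaussian kernel responses of the test vector against all training
    patterns of each class and returns the class with the largest summed
    activation. The kernel width sigma is an illustrative choice.
    """
    scores = {}
    for label in np.unique(train_y):
        class_patterns = train_X[train_y == label]
        # Squared Euclidean distances to every training pattern of this class
        d2 = np.sum((class_patterns - test_x) ** 2, axis=1)
        scores[label] = np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

def leave_one_out_accuracy(X, y, sigma=0.5):
    """Leave-one-out evaluation: classify each case using all other cases."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        if pnn_classify(X[mask], y[mask], X[i], sigma) == y[i]:
            correct += 1
    return correct / len(X)

# Illustrative usage with random stand-in data for the 41 cases:
# 52 x-ray features, 52 US features, and their 104-feature concatenation.
rng = np.random.default_rng(0)
xray_feats = rng.normal(size=(41, 52))
us_feats = rng.normal(size=(41, 52))
labels = np.array([1] * 18 + [0] * 23)        # 18 malignant, 23 benign
combined = np.hstack([xray_feats, us_feats])  # multimodality feature vector
print(leave_one_out_accuracy(combined, labels))
```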

Keywords: Image Analysis; Pattern Recognition; Multimodality; Breast Cancer; US; X-ray
