
ORIGINAL ARTICLE
Year : 2020  |  Volume : 10  |  Issue : 1  |  Page : 12-18

A Semi-supervised method for tumor segmentation in mammogram images


1 School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
2 Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran

Date of Submission: 13-Dec-2018
Date of Decision: 08-May-2019
Date of Acceptance: 25-Oct-2019
Date of Web Publication: 06-Feb-2020

Correspondence Address:
Prof. Monireh Abdoos
Number 323, Faculty of Computer Science and Engineering, Shahid Beheshti University, Velenjak, Tehran
Iran

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jmss.JMSS_62_18

  Abstract 


Background: Breast cancer is one of the most common cancers in women. Mammogram images play an important role in the treatment of the various stages of this cancer. In recent years, machine learning methods have been widely used for tumor segmentation in mammogram images. Pixel-based segmentation methods have been presented using both supervised and unsupervised learning approaches. Supervised learning methods are usually fast and accurate, but they require a large number of labeled samples, which are hard and expensive to provide. Unsupervised learning methods do not require labels for decision making, but they completely ignore prior knowledge, which may lead to low performance. Semi-supervised learning methods, which use only a small number of labeled data, avoid the need for the large number of samples required by supervised methods while usually achieving higher accuracy than unsupervised methods. Methods: In this study, we used a semi-supervised method for tumor segmentation in which pixel information is used for classification. Statistical and gray-level run-length matrix features are computed for each pixel, and Fisher discriminant analysis (FDA) is used for feature reduction. A co-training algorithm based on support vector machine and Bayes classifiers is proposed for tumor segmentation on the MIAS data set. Results and Conclusion: The results show that the proposed method outperforms both supervised classifiers.

Keywords: Bayes classifier, co-training algorithm, mammogram images, support vector machine classifier, tumor segmentation


How to cite this article:
Azary H, Abdoos M. A Semi-supervised method for tumor segmentation in mammogram images. J Med Signals Sens 2020;10:12-8

How to cite this URL:
Azary H, Abdoos M. A Semi-supervised method for tumor segmentation in mammogram images. J Med Signals Sens [serial online] 2020 [cited 2020 Feb 24];10:12-8. Available from: http://www.jmssjournal.net/text.asp?2020/10/1/12/277824




  Introduction


Breast cancer is the most common cancer in women worldwide. According to a 2003 study by the American Cancer Society, about 12% of U.S. women develop breast cancer over the course of their lifetime. In European countries, breast cancer accounts for 24% of all cancers and 19% of cancer deaths. In 2010, Iran's Ministry of Health announced that every year more than 7000 women are diagnosed with breast cancer and more than 4000 die from it.[1] Automatic detection of disease from medical images is a major research area in machine learning and medical engineering.[2] Mammography is one of the most commonly used tests for breast cancer diagnosis, and several methods have been proposed for tumor segmentation in mammography images.[3],[4] Reviews of automatic tumor segmentation methods for breast cancer are available in the literature.[5],[6],[7] These methods can be grouped into six categories:

  1. Contour-based segmentation approaches such as active contour algorithm[8],[9],[10]
  2. Region-based growing segmentation techniques[11],[12],[13],[14],[15],[16]
  3. Segmentation using two-dimensional discrete wavelet transform[17],[18]
  4. Segmentation based on watershed algorithm[19],[20],[21]
  5. Segmentation with co-occurrence matrix[22],[23],[24]
  6. Classification-based segmentation including supervised and unsupervised learning methods.[25],[26],[27],[28],[29]


Tao et al. proposed a classifier for tumor segmentation,[25] in which the region of interest (ROI) is divided into sub-regions and machine learning techniques are used to label each sub-region. A graph-cut algorithm and an optimization method were used for the final segmentation.

Dynamic programming has also been used for segmentation.[26],[27] In Song et al.'s study,[26] a plane-fitting method was first used to extract the ROI, and the optimal contour of the mass was then extracted using a dynamic programming approach. In a later study by Song et al.,[27] a similar method was used for tumor segmentation, in which a template matching method was combined with the dynamic programming approach. The results showed that the template matching approach outperforms plane-fitting in this area.

Supervised methods are usually fast and accurate, but they need a large number of labeled data, and providing labels for the data is very hard and expensive.[1],[30],[31] Unsupervised methods do not use labels for decision-making, and they may lead to poor performance because they ignore prior knowledge about the samples.[31],[32],[33] Clustering methods have also been used for pixel labeling in mammogram images.[28],[29] In the study by Shi et al.,[28] skin-air boundary estimation using a gradient weight map and pectoral-breast boundary detection using a clustering approach were performed, and a texture filter was used for the final detection. In the study by Kamil and Salih,[29] K-means and fuzzy C-means were used for tumor segmentation, with a lazy snapping algorithm added as an additional step to improve performance.

Semi-supervised learning is motivated by the fact that unlabeled data are easy to obtain and can therefore be used to improve the accuracy of classifiers. Semi-supervised methods such as the self-training and co-training algorithms overcome the problem of providing the large number of labeled samples required by supervised methods, because they need only a small set of labeled samples; they also achieve higher accuracy than unsupervised methods.[1],[31] Some breast tumor segmentation methods use the intensity of each pixel to segment the tumor from medical images, whereas texture features have mainly been used for distinguishing benign from malignant breast tumors.[34],[35],[36]

In this study, a semi-supervised method is proposed for tumor segmentation in mammography images. A co-training algorithm is used for the segmentation according to pixel-based features.[1] The study is organized as follows: in section “Basics”, basic concepts including feature extraction and reduction methods and co-training algorithm are presented. The proposed co-training algorithm is presented in section “The proposed method”. In section “Experimental results”, the experimental results and evaluation of the proposed approach in comparison to the supervised methods are presented. Finally, the paper is concluded in section “Conclusions”.


  Basics


In this section, the pixel-based features used in the proposed method are described. The dimensionality of the features is reduced using the Fisher discriminant analysis (FDA) method, which is described in detail in the following sections. The co-training method is described in the last part.

Feature extraction

In this study, we have used two groups of features: statistical features and gray-level run-length matrix (GLRLM) features. For each pixel in the ROI, a 5 × 5 window is used for feature extraction, as in the study by Azmi et al.[1]

The statistical features are the mean, variance, absolute deviation, and standard deviation of the window. The run-length matrix is defined such that each element (i, j) gives the number of runs of pixels with gray-level intensity i and run length j along a particular direction. In this study, four directions have been used: 0°, 45°, 90°, and 135°, as shown in [Figure 1].
Figure 1: Direction of run-length matrix



The GLRLM features are obtained using Eqs. (1) to (4).

  1. Short-run emphasis:

     $\mathrm{SRE} = \frac{1}{n_r} \sum_{i} \sum_{j} \frac{I(i,j)}{j^2}$ (1)

  2. Long-run emphasis:

     $\mathrm{LRE} = \frac{1}{n_r} \sum_{i} \sum_{j} j^2 \, I(i,j)$ (2)

  3. Gray-level nonuniformity:

     $\mathrm{GLN} = \frac{1}{n_r} \sum_{i} \left( \sum_{j} I(i,j) \right)^2$ (3)

  4. Run-length nonuniformity:

     $\mathrm{RLN} = \frac{1}{n_r} \sum_{j} \left( \sum_{i} I(i,j) \right)^2$ (4)

In the above equations, I(i, j) is the number of runs of pixels with gray level i and run length j, and $n_r$ is the total number of runs. Eleven features [Appendix A] are obtained from the run-length matrix in each direction θ = 0°, 45°, 90°, and 135°,[1] so 44 features are extracted from the GLRLM. Together with the four statistical features, each pixel is described by 48 features.
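For illustration, the per-window features above can be sketched in Python. This is a simplified sketch with our own function names (one run direction at a time; `direction` is the step between consecutive pixels of a run), not the authors' implementation:

```python
import numpy as np

def run_length_matrix(window, n_levels, direction=(0, 1)):
    """Run-length matrix for one direction: rlm[i, j-1] counts runs of
    gray level i with length j. direction=(0, 1) is the 0-degree case."""
    h, w = window.shape
    rlm = np.zeros((n_levels, max(h, w)), dtype=int)
    di, dj = direction
    for r in range(h):
        for c in range(w):
            level = window[r, c]
            pr, pc = r - di, c - dj
            # skip pixels that continue a run started earlier
            if 0 <= pr < h and 0 <= pc < w and window[pr, pc] == level:
                continue
            length = 1
            rr, cc = r + di, c + dj
            while 0 <= rr < h and 0 <= cc < w and window[rr, cc] == level:
                length += 1
                rr += di
                cc += dj
            rlm[level, length - 1] += 1
    return rlm

def glrlm_features(rlm):
    """Short-run emphasis, long-run emphasis, gray-level nonuniformity,
    and run-length nonuniformity, following Eqs. (1)-(4)."""
    n_r = rlm.sum()
    j = np.arange(1, rlm.shape[1] + 1)          # possible run lengths
    sre = (rlm / j**2).sum() / n_r
    lre = (rlm * j**2).sum() / n_r
    gln = (rlm.sum(axis=1) ** 2).sum() / n_r
    rln = (rlm.sum(axis=0) ** 2).sum() / n_r
    return sre, lre, gln, rln

def statistical_features(window):
    """Mean, variance, mean absolute deviation, and standard deviation."""
    w = window.astype(float)
    return w.mean(), w.var(), np.abs(w - w.mean()).mean(), w.std()
```

In the full method this would be repeated for the four directions (0°, 45°, 90°, 135°) on each 5 × 5 window, and the results concatenated into the 48-element pixel feature vector.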



Feature reduction

FDA is a popular method for linear supervised dimensionality reduction. In this step, the dimension of the extracted features is reduced using FDA, which decreases the within-class scatter and increases the between-class scatter. FDA is closely related to principal component analysis (PCA), as both are based on linear transformations. PCA minimizes the mean square error in data compression, finds mutually orthogonal directions of maximum variance in the data, and decorrelates the data using orthogonal transformations: it finds a lower-dimensional linear representation of the data vectors such that the original data can be reconstructed with minimum square error.[37] However, PCA does not consider differences between the classes. In FDA, the transformation maximizes the ratio of between-class variance to within-class variance: the goal is to decrease the variation of data within the same class and to increase the separation between the classes. [Figure 2] depicts an example of the FDA transformation.
Figure 2: An example for Fisher discriminant analysis: (a) The data before transformation and (b) the same data after transformation



[Figure 2]a depicts samples of two classes (shown in different colors) and the histograms that result from projecting the data onto the line connecting the class means; there is an overlapping region in the projected space. [Figure 2]b shows the equivalent projection based on FDA, with clearly improved class separation.[38]
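The two-class Fisher projection described above has a closed form. A minimal sketch follows; the names are our own, the class labels are assumed to be 0 and 1, and this is a generic illustration rather than the authors' code:

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher discriminant direction: maximizes the ratio of
    between-class to within-class scatter; closed form w = Sw^{-1} (m1 - m0)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter matrix (sum of the two class scatter matrices)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting each feature vector onto this direction (z = X @ w) yields the reduced, one-dimensional discriminant feature illustrated in [Figure 2]b.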

Semi-supervised learning

Semi-supervised learning is a class of machine learning techniques that uses both labeled and unlabeled data for training. The training set is usually composed of a small amount of labeled data and a large amount of unlabeled data. Because they need only a few labeled samples, semi-supervised methods overcome the problem of providing the large number of labeled samples required by supervised methods, while achieving higher accuracy than unsupervised methods. Unlabeled data, combined with a small amount of labeled data, can yield a significant improvement in learning accuracy.

Acquiring labels for the data usually requires an expert (e.g., to transcribe an audio segment) or a physical experiment (e.g., determining the three-dimensional structure of a protein or the presence of oil at a particular location). Providing a fully labeled training set may therefore be infeasible due to its cost, which makes semi-supervised learning methods of great practical significance. In this study, we use a co-training algorithm for tumor segmentation, described in detail in the following section. The co-training algorithm was introduced by Blum and Mitchell in 1998.[39] In this algorithm, two classifiers are trained on a small set of labeled data using two views. Each classifier then classifies the unlabeled data, selects a limited number of unlabeled samples whose labels it predicts most reliably, and adds these samples to the training set. The classifiers are retrained, and the process is repeated.[40]

The mechanism of the algorithm is shown in [Figure 3]. At the beginning, the two classifiers are trained on the limited labeled data, and each then labels a fixed set of unlabeled data independently: Learner 1 in view 1 and Learner 2 in view 2. The newly labeled data are treated as secondary samples that are added to the primary training data of the other classifier, as shown in [Figure 3]; that is, the data labeled by Learner 1 become a secondary training set for Learner 2 and vice versa. After this stage, the classifiers make decisions for the new, unlabeled test data.
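The loop described above can be sketched as a generic Blum-Mitchell-style skeleton. All names are ours; the paper pairs an SVM with a Bayes classifier, while here a trivial nearest-centroid learner stands in for both so the sketch stays self-contained:

```python
import numpy as np

class CentroidClassifier:
    """Stand-in learner with the fit/predict_proba interface the loop
    assumes; the paper would use SVM and Bayes classifiers here."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict_proba(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        inv = 1.0 / (d + 1e-9)                   # closer centroid = higher score
        return inv / inv.sum(axis=1, keepdims=True)
    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=1)]

def co_train(clf1, clf2, X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab,
             n_rounds=5, k=10):
    """Each round, each classifier labels its k most confident unlabeled
    samples; those samples are added to the OTHER classifier's training set."""
    L1 = [X1_lab, y_lab.copy()]                  # (features, labels) seen by clf1
    L2 = [X2_lab, y_lab.copy()]
    pool = np.arange(len(X1_unlab))              # indices still unlabeled
    for _ in range(n_rounds):
        if len(pool) == 0:
            break
        clf1.fit(L1[0], L1[1])
        clf2.fit(L2[0], L2[1])
        for src, view_src, dst, view_dst in ((clf1, X1_unlab, L2, X2_unlab),
                                             (clf2, X2_unlab, L1, X1_unlab)):
            if len(pool) == 0:
                break
            proba = src.predict_proba(view_src[pool])
            pred = src.classes_[proba.argmax(axis=1)]
            sel = np.argsort(proba.max(axis=1))[-k:]   # most confident
            top = pool[sel]
            dst[0] = np.vstack([dst[0], view_dst[top]])
            dst[1] = np.concatenate([dst[1], pred[sel]])
            pool = np.setdiff1d(pool, top)
    clf1.fit(L1[0], L1[1])
    clf2.fit(L2[0], L2[1])
    return clf1, clf2
```

The key design point is the exchange: each learner's confident labels become the other learner's secondary training set, exactly as in [Figure 3].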
Figure 3: The co-training algorithm




  The Proposed Method


The proposed method is shown in [Figure 4]; the co-training algorithm is used for tumor segmentation in mammogram images. Two expert radiologists have extracted the ROIs; a sample ROI is shown in [Figure 5].[41],[42],[43] In the first stage, two feature sets are extracted for each pixel of the training images, and the features are then reduced by the FDA method, as shown in [Figure 4]. Next, an image is randomly selected as labeled training data and given to a radiologist for manual segmentation.
Figure 4: Tumor segmentation procedure, according to the co-training algorithm

Figure 5: (a) An example of region of interest for a test image, (b) the output of Bayes method, (c) the output of support vector machine method, and (d) the output of co-training algorithm



In our proposed method, the two classifiers used in the co-training method are a support vector machine (SVM) classifier and a Bayes classifier. A small set of labeled data, with feature dimensionality reduced by FDA, is used to train the classifiers. A set of unlabeled data is then given to each classifier, and the output of each classifier provides the secondary data set used to retrain the other. In the test step, each classifier makes a decision for every pixel in the test image and its accuracy is calculated; the label of each pixel is then taken from the output of the classifier with the higher accuracy.


  Experimental Results Top


In this study, we have used the MIAS data set, which is available at http://peipa.essex.ac.uk/info/mias.html. The data set contains breast mammography images and their ground truth (GT) segmentations, which were manually extracted by a radiologist. In the experiments, the GT has been used as the reference for performance evaluation.

Here, we used two images for the training process: one labeled image and one unlabeled image. From the labeled image, 500 pixels were chosen (250 samples from the suspicious abnormal regions and 250 from the normal regions), so the two classes contribute equal numbers of training samples. Furthermore, 6000 pixels were selected from the unlabeled image; the outputs of the classifiers for these pixels are treated as new samples and, following [Figure 3], added to the training data. We thus have a total of 500 labeled and 6000 classifier-labeled pixels for training. In our experiments, 30 images were used as the test set, and a pixel-based semi-supervised classification method based on texture analysis was used to improve the performance;[1] the results are reported over all the pixels of the 30 images. [Figure 5]a shows a sample ROI extracted from a test image. The outputs of the Bayes and SVM classifiers are shown in [Figure 5]b and c, respectively, and the output of the co-training algorithm, which segments the tumor from the ROI, is shown in [Figure 5]d.

[Figure 6] shows receiver operating characteristic (ROC) curves for the Bayes classifier, the SVM classifier, and the co-training method. The performance of the classifiers is reported using ROC analysis, which is based on statistical decision theory and is widely used for the assessment of clinical performance. We compare the supervised learning performance of the two classifiers with the semi-supervised method proposed in this study; the co-training method clearly outperforms the other methods.
Figure 6: Receiver operating characteristic curves comparing the performance of the supervised and semi-supervised learning methods



The following measures have been used for the evaluation, where TP, TN, FP, and FN denote the numbers of true-positive, true-negative, false-positive, and false-negative pixels, respectively:

  1. Accuracy: this criterion measures the similarity between the labels assigned by the proposed method and the true labels:

     $\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$

  2. Positive predictive value (PPV): the percentage of pixels classified as positive (tumor) by the test that are correctly predicted:

     $\mathrm{PPV} = \frac{TP}{TP + FP}$

  3. Negative predictive value (NPV): the percentage of pixels classified as negative (nontumor) by the test that are correctly predicted:

     $\mathrm{NPV} = \frac{TN}{TN + FN}$

  4. Sensitivity: the percentage of tumor pixels recognized by the test:

     $\mathrm{Sensitivity} = \frac{TP}{TP + FN}$

  5. Specificity: the percentage of nontumor pixels recognized by the test:

     $\mathrm{Specificity} = \frac{TN}{TN + FP}$
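The five measures reduce to pixel-wise confusion counts, so they can be computed in a few lines; a minimal sketch with our own naming, assuming binary label maps (1 = tumor, 0 = nontumor):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, PPV, NPV, sensitivity, and specificity from pixel-wise
    confusion counts between a predicted and a ground-truth label map."""
    pred, truth = np.asarray(pred).ravel(), np.asarray(truth).ravel()
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```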
The ROC has been used to evaluate the accuracy of the system; the area under the curve, called Az, is a measure of the success of the system. The output of the proposed co-training method is compared with the watershed segmentation[40] and region-growing[4] approaches for five example images in [Table 1].
Table 1: Comparison of the proposed co-training algorithm with watershed and region-growing segmentation on test images

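The Az value mentioned above can be computed without explicitly tracing the ROC curve, via the rank-statistic identity: Az equals the probability that a randomly chosen tumor pixel receives a higher score than a randomly chosen nontumor pixel. A sketch with our own naming:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve (Az) via the rank (Mann-Whitney) formula,
    with tied scores counted as one half."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # tumor outranks nontumor
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```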


[Table 2] reports that, when the limited labeled data are used alone, the accuracy is 43.17% for SVM and 87.52% for Bayes, whereas the co-training algorithm reaches an accuracy of 94.04% with the same limited labeled data.
Table 2: Comparison of performance of the supervised learning and the semi-supervised learning methods



We can now compare the outputs of the supervised and semi-supervised learning methods. The average performance over the 30 images is shown in [Table 3] for SVM and Bayes as supervised classifiers and for the proposed co-training method as a semi-supervised classifier. The average performance of the co-training algorithm is higher than that of the supervised methods.
Table 3: The performance of supervised and semi-supervised methods according to mean and standard deviation








  Conclusions


In this study, a semi-supervised learning method is proposed for tumor segmentation in mammogram images. It was shown that using the co-training algorithm for tumor segmentation yields higher accuracy than the supervised methods. An advantage of the proposed method is that it does not require a large amount of labeled data for classification, which makes it computationally tractable.

A disadvantage of the method is that, like other methods, its accuracy is lower on low-quality images. The main reason is that the true labels of the secondary training data are unknown; therefore, the output of the classifiers may be biased toward one of the classes.

Future work includes using more than two classifiers in co-training. Probability rules could also be used for label prediction, which may improve the performance of the co-training method. Additional knowledge about the secondary training data could also be used to prevent the classifiers from becoming biased and thereby improve the accuracy of the co-training method.

Financial support and sponsorship

None.

Conflicts of interest

There are no conflicts of interest.







Hanie Azary received the B.Sc. degree in computer engineering from Payam Noor University of Tehran in 2011 and the M.Sc. degree in artificial intelligence from Iran University of Science and Technology, Tehran, Iran, in 2015. Her research interests include medical image processing, machine vision, and pattern recognition.

Email: hanie.azary@yahoo.com



Monireh Abdoos received her Ph.D. in artificial intelligence from Iran University of Science and Technology in 2013. She is currently an assistant professor at the Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran. Her research interests include multi-agent systems, machine learning, and intelligent transportation systems.

Email: m_abdoos@sbu.ac.ir



 
  References

1. Azmi R, Norozi N, Anbiaee R, Salehi L, Amirzadi A. IMPST: A new interactive self-training approach to segmentation of suspicious lesions in breast MRI. J Med Signals Sens 2011;1:138-48.
2. Nahid AA, Kong Y. Involvement of machine learning for breast cancer image classification: A survey. Comput Math Methods Med 2017;2017:3781951.
3. Cordeiro FR, Saki F, Silva-Filho AG, Pinheiro Dos Santos W. Analysis of supervised and semi-supervised GrowCut applied to segmentation of masses in mammography images. Comput Methods Biomech Biomed Eng 2017;5:297-315.
4. Cordeiro FR, Santos WP, Silva-Filho AG. A semi-supervised fuzzy GrowCut algorithm to segment and classify regions of interest of mammographic images. Expert Syst Appl 2016;65:116-26.
5. Oliver A, Freixenet J, Martí J, Pérez E, Pont J, Denton ER, et al. A review of automatic mass detection and segmentation in mammographic images. Med Image Anal 2010;14:87-110.
6. Behrens S, Laue H, Althaus M, Boehler T, Kuemmerlen B, Hahn HK. Computer assistance for MR based diagnosis of breast cancer: Present and future challenges. Comput Med Imaging Graph 2007;31:236-47.
7. Dabass J, Arora S, Vig R, Hanmandlu M. Segmentation techniques for breast cancer imaging modalities: A review. 9th International Conference on Cloud Computing, Data Science and Engineering; 2019.
8. Eltonsy NH, Tourassi GD, Elmaghraby AS. A concentric morphology model for the detection of masses in mammography. IEEE Trans Med Imaging 2007;26:880-9.
9. Arefan D, Talebpour A, Aghamiri M. Diagnostic of mass and abnormal areas in mammography images using Chebyshev moments. 1st MEFOMP International Conference of Medical Physics; 2011.
10. Shi J, Sahiner B, Chan HP, Ge J, Hadjiiski L, Helvie MA. Characterization of mammographic masses based on level set segmentation with new image features and patient information. Med Phys 2008;35:280-90.
11. Berber T, Alpkocak A, Balci P, Dicle O. Breast mass contour segmentation algorithm in digital mammograms. Comput Methods Programs Biomed 2013;110:150-9.
12. Meenalosinin S. Segmentation of cancer cells in mammogram using region growing method and Gabor features. Int J Eng Res App 2012;2:1055-62.
13. Yuvraj K. Automatic mammographic mass segmentation based on region growing technique. 3rd International Conference on Electronics, Biomedical Engineering and its Applications; April 2013.
14. Görgel P, Sertbas A, Ucan ON. Mammographical mass detection and classification using local seed region growing-spherical wavelet transform (LSRG-SWT) hybrid scheme. Comput Biol Med 2013;43:765-74.
15. Varughese LS, Anitha J. A study of region based segmentation methods for mammograms. International Journal of Research in Engineering and Technology 2013;2:421-5.
16. Kozegar E, Soryani M, Behnam H, Salamati M, Tan T. Mass segmentation in automated 3-D breast ultrasound using adaptive region growing and supervised edge-based deformable model. IEEE Trans Med Imaging 2018;37:918-28.
17. Mencattini A, Lojacono L. Mammographic images enhancement and denoising for breast cancer detection using dyadic wavelet processing. IEEE Trans Instrum Meas 2008;57:1422-30.
18. Narayan Panda R, Ketan Panigrahi B, Ranjan Patro M. Feature extraction for classification of microcalcifications and mass lesions in mammograms. Int J Comput Sci Net Secur 2009;9:255.
19. Vincent L. Watersheds in digital spaces: An efficient algorithm based on immersion simulation. IEEE Trans Pattern Anal Mach Intell 1991;13:583-98.
20. Herredsvela J, Engan K, Gulsrud TO, Skretting K. Detection of masses in mammograms by watershed segmentation and sparse representations using learned dictionaries. Proceedings of NORSIG; 2005.
21. Sharma J, Sharma S. Mammogram image segmentation using watershed. Int J Info Tech and Knowledge Management 2011;4:423-5.
22. Bandyopadhyay S, Maitra I. Digital imaging in mammography towards detection and analysis of human breast cancer. IJCA Special Issue on Computer Aided Soft Computing Techniques for Imaging and Biomedical Applications; 2010. p. 29-34.
23. Kanta Maitra I, Nag S, Kumar Bandyopadhyay S. Identification of abnormal masses in digital mammography images. Int J Comput Graph 2011;2:17-30.
24. Mohd Khuzi A, Besar R, Wan Zaki W, Ahmad N. Identification of masses in digital mammogram using gray level co-occurrence matrices. Biomed Imaging Interv J 2009;5:e17.
25. Tao Y, Lo SC, Freedman MT, Makariou E, Xuan J. Multilevel learning-based segmentation of ill-defined and spiculated masses in mammograms. Med Phys 2010;37:5993-6002.
26. Song E, Jiang L, Jin R, Zhang L, Yuan Y, Li Q, et al. Breast mass segmentation in mammography using plane fitting and dynamic programming. Acad Radiol 2009;16:826-35.
27. Song E, Xu S, Xu X, Zeng J, Lan Y, Zhang S, et al. Hybrid segmentation of mass in mammograms using template matching and dynamic programming. Acad Radiol 2010;17:1414-24.
28. Shi P, Zhong J, Rampun A, Wang H. A hierarchical pipeline for breast boundary segmentation and calcification detection in mammograms. Comput Biol Med 2018;96:178-88.
29. Kamil MY, Salih AM. Mammography images segmentation via fuzzy C-mean and K-mean. International Journal of Intelligent Engineering and Systems 2019;12:22-9.
30. Lucht R, Delorme S, Brix G. Neural network-based segmentation of dynamic MR mammographic images. Magn Reson Imaging 2002;20:147-54.
31. Azmi R, Pishgoo B, Norozi N, Yeganeh S. Ensemble semi-supervised framework for brain magnetic resonance imaging tissue segmentation. J Med Signals Sens 2013;3:94-106.
32. Saheb Basha S, Satya Prasad K. Automatic detection of breast cancer mass in mammograms using morphological operator and fuzzy c-means clustering. J Theor Appl Inf Technol 2009;5:704-9.
33. Li HD, Kallergi M, Clarke LP, Jain VK, Clark RA. Markov random field for tumor detection in digital mammography. IEEE Trans Med Imaging 1995;14:565-76.
34. Gibbs P, Turnbull LW. Textural analysis of contrast-enhanced MR images of the breast. Magn Reson Med 2003;50:92-8.
35. Tahmasbi A, Saki F, Shokouhi SB. Classification of benign and malignant masses based on Zernike moments. Comput Biol Med 2011;41:726-35.
36. Amirzadi A, Azmi R. Introducing kernel based morphology as an enhancement method for mass classification on mammography. J Med Signals Sens 2013;3:117-26.
37. Ilin A, Raiko T. Practical approaches to principal component analysis in the presence of missing values. J Mach Learn Res 2010;11:1957-2000.
38. Tharwat A, Gaber T, Ibrahim A, Hassanien AE. Linear discriminant analysis: A detailed tutorial. AI Communications 2017;30:169-90.
39. Blum A, Mitchell T. Combining labeled and unlabeled data with co-training. In: Proceedings of the Eleventh Annual Conference on Computational Learning Theory. New York, NY: ACM; 1998. p. 92-100.
40. Dubey RB, Hanmandlu M, Gupta SK. A comparison of two methods for the segmentation of masses in the digital mammograms. Comput Med Imaging Graph 2010;34:185-91.
41. Saki F, Tahmasbi A, Soltanian-Zadeh H, Shokouhi SB. Fast opposite weight learning rules with application in breast cancer diagnosis. Comput Biol Med 2013;43:32-41.
42. Tahmasbi A, Saki F, Shokouhi SB. An effective breast mass diagnosis system using Zernike moments. Proceedings of the 17th Iranian Conference on Biomedical Engineering; 2010. p. 1-4.
43. Tahmasbi A, Saki F, Aghapanah H, Shokouhi SB. A novel breast mass diagnosis system based on Zernike moments as shape and density descriptor. Proceedings of the IEEE 18th Iranian Conference on Biomedical Engineering; Tehran, Iran; 2011. p. 100-4.

