

ORIGINAL ARTICLE 

Year : 2015 | Volume : 5 | Issue : 1 | Page : 12-20

Chaotic particle swarm optimization with mutation for classification
Zahra Assarzadeh^{1}, Ahmad Reza Naghsh-Nilchi^{2}
^{1} M.Sc. Student, Department of Artificial Intelligence, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran and Lecturer, Payam Higher Education Institution of Golpaygan, Isfahan, Iran; ^{2} Department of Artificial Intelligence, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
Date of Submission: 17-Nov-2013
Date of Acceptance: 01-Dec-2014
Date of Web Publication: 18-Sep-2019
Correspondence Address: Zahra Assarzadeh, M.Sc. Student, Department of Artificial Intelligence, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran and Lecturer, Payam Higher Education Institution of Golpaygan, Iran
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/2228-7477.150380
In this paper, a chaotic particle swarm optimization with mutation-based classifier (MCPS-classifier) is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence into local minima associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. In order to demonstrate the effectiveness of the proposed classifier, it is tested on three data classification problems, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms including the k-nearest neighbor, as a conventional classifier, and the particle swarm classifier, genetic algorithm classifier and imperialist competitive algorithm classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms.
Keywords: Decision hyperplanes, medical database classification, particle swarm optimization, pattern recognition
How to cite this article: Assarzadeh Z, Naghsh-Nilchi AR. Chaotic particle swarm optimization with mutation for classification. J Med Signals Sens 2015;5:12-20
Introduction   
Classification is a supervised learning technique that uses labeled data samples to generate a model (classifier). The resulting model classifies new data samples into different predefined groups or classes. In other words, in a classification problem, objects are assigned to one of several predefined categories. From a mathematical point of view, classification can be defined as a mapping from the input feature space into a set of labels. In recent years, researchers have developed many classification techniques including intelligent particle swarm (PS) classifiers, ^{[1]} binary classifiers, ^{[2],[3],[4]} decision tree classifiers, ^{[5],[6]} artificial neural network classifiers, ^{[7],[8],[9],[10]} Bayesian classifiers, ^{[11]} support vector machine classifiers, ^{[12],[13]} and instance (prototype) based classifiers. ^{[14]}
This paper presents a chaotic particle swarm optimization (PSO) with mutation-based classifier (MCPSO) approach for classifier design. PSO is a powerful evolutionary algorithm inspired by the social behavior of bird flocks and fish schools. ^{[15]} It is one of the most promising optimization algorithms and has been used for a wide range of complex engineering optimization problems. Algorithmic simplicity and fast convergence are the most attractive features of this metaheuristic algorithm. However, when PSO is applied to strongly multimodal optimization problems, it tends to suffer from premature convergence. ^{[16],[17]} To overcome premature convergence and enhance the optimization performance, a chaotic PSO with mutation is proposed here. The presence of the mutation operator helps to sharpen the convergence and tune toward the best solution.
Since MCPSO is a simple and effective search technique in high dimensional spaces, an MCPSO-based classifier has the potential of successfully classifying classes in different high dimensional feature spaces with little prior information. By searching the solution space, the MCPS-classifier moves toward optimal hyperplanes in such a manner that the number of misclassified points is minimized.
In general, classification problems involve a number of features. Not all of these features are equally important for a specific task. Better performance may be achieved by discarding redundant or irrelevant features. Therefore, using the smallest number of features, the classification process can be fast and accurate. This objective can be achieved using feature selection. Feature selection strategies are used to explore the effect of irrelevant attributes on the performance of classifier systems. ^{[18],[19]}
The goal of this study is to increase the classification accuracy rate by employing an approach based on the proposed algorithm. To do this, we use two types of MCPSO: a continuous-valued version and a binary version. The continuous-valued version is used to optimize the best model parameters, while the binary version is used to search for the optimal feature subset. The developed MCPSO approach not only tunes the parameter values of the model but also identifies a subset of features that maximizes the classification accuracy rate.
For comparing experimental results, three common benchmark problems in medical database classification were considered. The Wisconsin diagnostic breast cancer (WDBC), Wisconsin breast cancer (WBC) and heart-statlog data classifications are common problems in pattern recognition research.
The performance of the MCPS-classifier has been compared with the k-nearest neighbor (kNN), PS-based classifier, genetic algorithm based classifier (GA-classifier) and imperialist competitive algorithm based classifier (ICA-classifier), to show that the average recognition rates of the designed MCPS-classifier are better than those of both the traditional and the newer classifiers. Some illustrative plots have been included to compare the convergence speed of the MCPS-classifier and the other mentioned metaheuristic-algorithm-based classifiers.
In this paper, section two explains the standard and improved real-binary PSO algorithm. The MCPS-classifier is described in the next section. Section four considers the implementation of the classifier and experimental results on the three aforesaid pattern recognition problems. Finally, conclusions and discussion are presented in section five.
Particle swarm optimization algorithm   
Standard Real-Binary Particle Swarm Optimization
Eberhart and Kennedy in 1995 developed an evolutionary computation technique, named PSO, which is inspired by the social behavior of bird flocking or fish schooling. ^{[15]} The algorithm is initialized with a population of random solutions and searches for optima by updating generations. Unlike GA, there are no evolution operators such as crossover and mutation in PSO. A particle swarm can be considered as a population of individuals, in which each individual contains the appropriate number of features to place it in the problem space. The individuals are arranged in neighborhoods so that they can share information. In PSO, each single solution is called a "particle" and each particle has a fitness value, evaluated by the objective function to be optimized. Particles have velocities, which direct the flying of the particles. By following the current optimum particles, the particles are flown through the problem space. Each particle is updated by two "best" values in each iteration. The first one is the best position (fitness) it has achieved so far, which is called Pbest. The other, named Gbest, is the overall best value obtained so far by any particle in the population. After finding these two best values, the particle updates its velocity and position with Eqs. (1) and (2), respectively.
V_{id}(t+1) = w·V_{id}(t) + c_{1}·rand()·(Pbest_{id} - X_{id}(t)) + c_{2}·rand()·(Gbest_{d} - X_{id}(t))   (1)
X_{id}(t+1) = X_{id}(t) + V_{id}(t+1)   (2)
where w is the inertia weight, V_{id} is the particle velocity, X_{id} is the current particle position, rand() is a random number drawn uniformly from (0, 1), and c_{1}, c_{2} are learning (acceleration) factors. The velocities of particles on each dimension are clamped to a maximum velocity V_{max}.
In PSO, the key factors affecting the convergence behavior are the parameters w, c_{1} and c_{2}. ^{[20],[21]} The balance between global exploration and local search ability is controlled by the inertia weight: a large inertia weight favors the global search and a small inertia weight favors the local search. Hence, an inertia weight that linearly decreases from 0.9 to 0.4 throughout the search process is usually used. ^{[22]}
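To make the update rules concrete, the following is a minimal numpy sketch of Eqs. (1) and (2) together with the linearly decreasing inertia weight mentioned above; the function names, the default values of c_{1} and c_{2}, and the velocity clamp are illustrative choices, not parameters taken from this paper.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w, c1=2.0, c2=2.0, v_max=1.0):
    """One standard PSO iteration: velocity update (Eq. 1) and position update (Eq. 2)."""
    r1 = np.random.rand(*X.shape)
    r2 = np.random.rand(*X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (1)
    V = np.clip(V, -v_max, v_max)                               # clamp to V_max
    X = X + V                                                   # Eq. (2)
    return X, V

def inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Inertia weight decreased linearly from 0.9 to 0.4 over the run."""
    return w_start - (w_start - w_end) * t / t_max
```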
In order to extend the PSO algorithm to tackle binary problems effectively, Kennedy and Eberhart adapted the continuous PSO algorithm to binary spaces. ^{[23]} In the binary version of PSO, the position of the particle takes the value 0 or 1, and the velocity of the particle represents the probability that a bit (position) takes on 0 or 1. In binary PSO (BPSO), Eq. (1) remains unchanged, but Eq. (2) is redefined by Eq. (3):
X_{id}(t+1) = 1 if rand() < S(V_{id}(t+1)); otherwise X_{id}(t+1) = 0   (3)
where S(.) is the sigmoid function, which is used to transform the velocity into a probability and is defined as follows:
S(V_{id}) = 1 / (1 + e^{-V_{id}})   (4)
and rand() is a random number selected from the uniform distribution over (0,1).
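A minimal sketch of the binary update of Eqs. (3) and (4): the velocity is squashed through the sigmoid and interpreted as the probability that the corresponding bit becomes 1. The helper names are ours.

```python
import numpy as np

def sigmoid(v):
    """Eq. (4): map a velocity to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def bpso_position_update(V):
    """Eq. (3): bit X_id is set to 1 with probability S(V_id), otherwise 0."""
    return (np.random.rand(*V.shape) < sigmoid(V)).astype(int)
```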
Chaotic Particle Swarm Optimization
The apparently random dynamic behavior of deterministic nonlinear systems is called chaos. It has raised enormous interest in different fields of science such as synchronization, chaos control, optimization theory and pattern recognition. ^{[24]} In optimization algorithms based on chaos theory, the methods that use chaotic variables instead of random variables are called chaotic optimization algorithms (COA). COA is a stochastic search methodology that differs from the existing swarm intelligence and evolutionary computation methods. COA can carry out overall searches at higher speeds than stochastic searches that depend on probabilities. ^{[25]}
Several different chaotic sequences exist; the most commonly used one, the logistic map, is considered in this paper. The logistic map is a frequently used chaotic map whose sequences can be quickly generated and easily stored, so there is no need to store long sequences. ^{[26]} In this study, we substitute the random parameters in PSO with sequences generated by the logistic map. The random parameters are modified by the logistic map according to the following equation:
Cr(t+1) = k · Cr(t) · (1 - Cr(t))   (5)
In Eq. (5), k = 4 and, for each independent run, Cr(0) is generated randomly, with Cr(0) not equal to {0, 0.25, 0.5, 0.75, 1}. The behavior of Cr(t) (as t goes to infinity) is controlled by the driving parameter k of the logistic map. ^{[27]} Considering the above descriptions, the velocity update equation for chaotic particle swarm optimization can be formulated as:
V_{id}(t+1) = w·V_{id}(t) + c_{1}·Cr(t)·(Pbest_{id} - X_{id}(t)) + c_{2}·(1 - Cr(t))·(Gbest_{d} - X_{id}(t))   (6)
In Eq. (6), Cr(t) is obtained from the logistic map and takes values between 0.0 and 1.0.
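The following sketch shows the logistic map of Eq. (5) and a chaotic velocity update in the spirit of Eq. (6), where the uniform random coefficients of Eq. (1) are replaced by successive values of Cr(t). The exact way Cr(t) enters the two attraction terms follows a common chaotic-PSO formulation and is an assumption on our part.

```python
import numpy as np

def logistic_map(cr, k=4.0):
    """Eq. (5): one step of the logistic map; Cr(0) must avoid {0, 0.25, 0.5, 0.75, 1}."""
    return k * cr * (1.0 - cr)

def chaotic_pso_step(X, V, pbest, gbest, w, cr, c1=2.0, c2=2.0, v_max=1.0):
    """Chaotic variant of Eq. (1): Cr(t) and (1 - Cr(t)) replace the random numbers (assumed form)."""
    cr = logistic_map(cr)
    V = w * V + c1 * cr * (pbest - X) + c2 * (1.0 - cr) * (gbest - X)
    V = np.clip(V, -v_max, v_max)
    return X + V, V, cr
```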
Proposed Mutation-Based Particle Swarm Optimization (MutPSO) Algorithm
Velocity and position updating are the two major operations in standard PSO. These operators update the search repeatedly and, in certain situations, may cause the swarm to get stuck in local optima. The proposed PSO incorporates mutation operators from the GA to overcome this difficulty. By introducing new material into the population, the mutation operator allows faster convergence and prevents trapping in a local optimum.
One of the most widely used mutation operators in real coded GAs is Michalewicz's nonuniform mutation. ^{[28]} The mutated point x'_{i} is created from a point x_{i} as follows:
x'_{i} = x_{i} + D(t, UB_{i} - x_{i})  if r < 0.5;   x'_{i} = x_{i} - D(t, x_{i} - LB_{i})  otherwise   (7)
where t is the current generation number and r is a random number between 0 and 1 with uniform distribution. LB_{i} and UB_{i} denote the lower and upper bounds of the i^{th} component of the decision vector, respectively. The function D(t, y), given below, takes a value in the interval (0, y).
D(t, y) = y · (1 - u^{(1 - t/T)^{b}})   (8)
where T is the maximum number of generations, u is a random number in the interval (0,1) with uniform distribution and b is a parameter determining the strength of the mutation operator. In the initial generations, with an emphasis on exploration, nonuniform mutation tends to search the space uniformly, whereas in the later generations it tends to search the space locally, that is, closer to its descendants, in order to fine-tune the solution. ^{[28]}
The mutation operator just described provides the ability to "fly" to new search areas and, as in the GA, information in the individuals can be changed. In other words, the presence of the mutation operator makes the proposed MutPSO a more explorative search mechanism than standard PSO, and consequently MutPSO can find better optima more consistently.
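A minimal sketch of the nonuniform mutation of Eqs. (7) and (8): early in the run the perturbation can span the whole range of a variable, and it shrinks toward zero as t approaches T. The default strength b and the 50/50 choice of direction are conventional values, not taken from the paper.

```python
import numpy as np

def nonuniform_delta(t, T, y, b=5.0):
    """Eq. (8): a value in (0, y) that shrinks as the generation t approaches T."""
    u = np.random.rand()
    return y * (1.0 - u ** ((1.0 - t / T) ** b))

def nonuniform_mutation(x, lb, ub, t, T, b=5.0):
    """Eq. (7): move one randomly chosen component toward its upper or lower bound."""
    x = np.asarray(x, dtype=float).copy()
    i = np.random.randint(len(x))
    if np.random.rand() < 0.5:
        x[i] += nonuniform_delta(t, T, ub[i] - x[i], b)   # push toward the upper bound
    else:
        x[i] -= nonuniform_delta(t, T, x[i] - lb[i], b)   # push toward the lower bound
    return x
```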
In BPSO, to explore untried areas of the search space, we used the following mutation operator as suggested in: ^{[29]}
(9)
where r_{mut} is the probability of random mutation, N is the total number of particles and N_{t} is the total (initial) number of features of the dataset. After updating the particle position as in Eqs. (1) and (3), each bit of the position vector is mutated with probability r_{mut}.
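A sketch of the bit-flip step applied after the BPSO position update: each bit of the binary position is flipped with probability r_mut. How r_mut is computed from N and N_t in Eq. (9) is not reproduced here, so it is simply passed in as a parameter.

```python
import numpy as np

def binary_mutation(X, r_mut):
    """Flip each bit of the binary position matrix X with probability r_mut."""
    flip = np.random.rand(*X.shape) < r_mut
    return np.where(flip, 1 - X, X)
```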
Chaotic particle swarm optimization with mutation-based classifier   
In this paper, the mentioned mutation operator and chaotic sequences are applied simultaneously to improve the performance of PSO. A chaotic PSO with mutation based classifier (MCPS-classifier) has three major parts: decision hyperplanes, the fitness function definition, and its structure.
Decision Hyperplanes
A general hyperplane has the following form:
w_{1}x_{1} + w_{2}x_{2} + ... + w_{n}x_{n} + w_{n+1} = 0   (10)
where w = [w_{1}, ..., w_{n+1}] is the weight vector, x = [x_{1}, ..., x_{n}, 1] is the augmented feature vector, and n is the feature space dimension.
The MCPS-classifier must find H weight vectors w^{1}, ..., w^{H} in the solution space in such a manner that the number of misclassified points is minimized, where H is the necessary number of decision hyperplanes.
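The following sketch illustrates one plausible way H hyperplanes of the form of Eq. (10) can label samples: the sign pattern of the H augmented dot products identifies a region of the feature space, and each region is assigned the majority class of the training points it contains. The paper only states that misclassified points are minimized, so this decoding rule is an assumption used for illustration.

```python
import numpy as np

def region_codes(W, X):
    """Sign pattern of each sample with respect to the H hyperplanes of Eq. (10).

    W: (H, n+1) weight vectors; X: (m, n) samples, augmented internally with a constant 1.
    """
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X_aug @ W.T > 0).astype(int)

def fit_region_labels(W, X_train, y_train):
    """Assign each region (sign pattern) the majority class of the training points inside it."""
    codes = [tuple(c) for c in region_codes(W, X_train)]
    labels = {}
    for code in set(codes):
        members = [y for c, y in zip(codes, y_train) if c == code]
        labels[code] = max(set(members), key=members.count)
    return labels
```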
Fitness Function Definition
This study developed an improved PSO approach for parameter determination and feature selection in an evolutionary classifier. For each hyperplane, n + 1 decision variables are required. For feature selection, n decision variables must be adopted. The feature selection variables are Boolean: "1" represents that the feature is selected, and "0" indicates that the feature is not selected.
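As a small illustration of this encoding, a binary particle can be applied to the data simply by keeping the columns whose mask bit is 1; the helper below is illustrative, not part of the paper.

```python
import numpy as np

def apply_feature_mask(X, mask):
    """Keep only the feature columns of X whose mask bit is 1 (mask is a length-n 0/1 vector)."""
    return X[:, np.asarray(mask, dtype=bool)]
```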
In this study, classification accuracy and the number of selected features are the two measures used to design the fitness function. We defined the fitness function for an individual such that a high fitness value is achieved with high classification accuracy and a small number of features. Thus, the fitness function is defined as follows:
(11)
where w_{F} is the weight for the number of selected features (0 < w_{F} < 1), f_{j} is the value of the feature mask ("1" represents that feature j is selected and "0" that it is not), and n_{F} is the total number of features. The classification accuracy is defined as follows:
accuracy (%) = (number of correctly classified samples / total number of samples) × 100   (12)
To fully characterize the classifier performance, additional information from the confusion matrix is considered as well. This information is necessary in the classification of data with an imbalanced class distribution, where even a total error in predicting a rare class would have only a small impact on the overall accuracy. Therefore, the following measures are also considered: ^{[30]}
Sensitivity = TP / (TP + FN)   (13)
Specificity = TN / (TN + FP)   (14)
MCC = (TP·TN - FP·FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))   (15)
where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives and FN is the number of false negatives. Sensitivity measures the proportion of actual positives which are correctly identified. Specificity measures the proportion of negatives which are correctly identified. MCC is the Matthews correlation coefficient, ^{[30]} which reflects both the sensitivity and specificity of the prediction algorithm.
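The measures of Eqs. (12)-(15) follow directly from the confusion-matrix counts; a straightforward sketch:

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Accuracy (%), sensitivity, specificity and MCC from confusion-matrix counts."""
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)              # Eq. (12)
    sensitivity = tp / (tp + fn)                                    # Eq. (13)
    specificity = tn / (tn + fp)                                    # Eq. (14)
    mcc = (tp * tn - fp * fn) / np.sqrt(                            # Eq. (15)
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return accuracy, sensitivity, specificity, mcc
```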
The Structure of the Mutation-Based Classifier Particle Swarm (MCPS-classifier)
According to the above descriptions, the design of the MCPS-classifier follows the pseudocode in [Figure 1]. In the MCPS-classifier, each particle is selected randomly from the solution space and has the form P = [w^{1}, w^{2}, ..., w^{H}], where w^{i} is the weight vector of the i^{th} hyperplane and H is the predefined number of hyperplanes. The fitness function is defined as in Eq. (11). A default maximum number of iterations or the best fitness value can be considered as the termination condition. After enough iterations, the particles converge to a solution and the decision hyperplanes with the minimum number of misclassified training points are achieved.
Implementation and results   
Datasets
Three pattern recognition problems with different augmented feature vector dimensions (10, 14 and 31) were used to show the performance of the MCPS-classifier. These datasets were obtained from the University of California at Irvine machine learning repository (http://mlearn.ics.edu//MLRepository.html). A description of the data sets is given here:
Wisconsin diagnostic breast cancer
Breast cancer is the most common cancer and the second largest cause of cancer deaths among women. The WDBC dataset is derived from Dr. Wolberg's clinical case reports and contains 569 instances. WDBC has 30 continuous inputs, and the task is to classify a tumor as either benign or malignant.
Wisconsin breast cancer dataset
This breast cancer data set was created by Wolberg from the University of Wisconsin. It contains 699 instances characterized by nine features: (1) Clump thickness, (2) uniformity of cell size, (3) uniformity of cell shape, (4) marginal adhesion, (5) single epithelial cell size, (6) bare nuclei, (7) bland chromatin, (8) normal nucleoli, and (9) mitoses, which are used to predict benign or malignant growths. In this data set, 241 (34.5%) instances are malignant and 458 (65.5%) instances are benign.
Heart-statlog
The data set is based on data from the Cleveland Clinic Foundation and contains 270 instances belonging to two classes: the presence or absence of heart disease. It is described by 13 features (age, sex, chest pain type, resting blood pressure, serum cholesterol, fasting blood sugar, resting electrocardiographic results, maximum heart rate, exercise induced angina, old peak, slope, number of major vessels and thal).
Partition of Datasets
The well-known tenfold cross validation procedure is used to split the dataset. Each dataset is partitioned into ten data subsets, and the MCPS-classifier and the other classifiers are executed once for each partition. In each run a different partition is used as the testing set and the remaining nine are grouped together to build the training set. The training set is used to train the model for good learning capability, while the generalization capability of the proposed classifier is evaluated on the testing set.
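A sketch of the tenfold protocol just described, using scikit-learn's KFold purely for illustration (the paper does not state how the folds were generated); train_and_score stands for whatever classifier is being evaluated.

```python
import numpy as np
from sklearn.model_selection import KFold

def ten_fold_scores(X, y, train_and_score):
    """Run a classifier once per fold; train_and_score(X_tr, y_tr, X_te, y_te) returns an accuracy."""
    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    scores = [train_and_score(X[tr], y[tr], X[te], y[te]) for tr, te in kf.split(X)]
    return np.mean(scores), np.std(scores)
```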
Comparison with Promising Methods
The performance of the proposed classifier is compared with the performance of the kNN classifier, PS-classifier, GA-classifier and ICA-classifier, to show that the average recognition rates of the designed classifier are better than those of kNN, as a conventional classifier, and of the PS-classifier, GA-based classifier and ICA-classifier, as newer classifiers. In the kNN classifier, k is chosen as a function of T, where T is the number of training samples.
In the PS-classifier, GA-classifier, ICA-classifier and the proposed classifier, for each problem the initial population size is set to 20 and the termination condition is a maximum number of function evaluations, set to an experimentally obtained value of 10000. In the PS-classifier, to effectively balance the local and global search abilities of the swarm, the inertia weight is decreased linearly from 0.9 to 0.4 throughout the search process. ^{[22]} The learning factors c_{1} and c_{2} are both set equal to 2. For the GA-classifier, roulette-wheel selection, uniform crossover with crossover probability P_{c} = 0.5 and uniform mutation with mutation probability P_{m} = 0.3 are used. In the simulation of the ICA-classifier, the revolution rate, damp ratio and uniting threshold are set to 0.2, 0.99 and 0.02, respectively. Furthermore, the numbers of imperialists and colonies are set to 4 and 16, respectively.
Performance Comparisons
The performance of the proposed MCPS-classifier is compared with the kNN classifier, PS-classifier, GA-classifier and ICA-classifier, all of which are tested on the data sets described earlier.
All algorithms are coded and executed on the same computer in MATLAB 7.12. [Table 1], [Table 2] and [Table 3] present the results corresponding to the WDBC, WBC and heart-statlog data classifications, respectively. These tables show the tenfold cross validation results of all the studied classifiers for each of the three data sets. For all datasets, the performance metrics of the 10 runs are averaged and reported. Testing accuracies, standard deviations of testing accuracy, sensitivity, specificity and MCC are shown in these tables. These values demonstrate the ability of the proposed classifier in comparison with the other mentioned classifiers. As these tables show, the testing accuracy and MCC of the MCPS-classifier are better than those of the other classifiers on all three datasets, and the MCPS-classifier gives reasonably good results on these datasets. These experiments have been done using 2 hyperplanes for all the datasets (H = 2).
A statistical paired t-test on accuracy is also conducted for all data sets. Specifically, a paired t-test between the MCPS-classifier and each one of the other methods is conducted. The results of the t-test at the 5% significance level between the MCPS-classifier and each of the other algorithms are shown in [Table 1], [Table 2] and [Table 3]. "+" indicates that the proposed algorithm is significantly better than the compared algorithm. "≈" indicates that the difference is not statistically significant.
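A sketch of the paired comparison described above, using scipy's ttest_rel on the per-fold accuracies of two classifiers; the 5% threshold matches the text, and the helper name is ours.

```python
from scipy.stats import ttest_rel

def compare_classifiers(acc_proposed, acc_other, alpha=0.05):
    """Paired t-test over matched fold accuracies; returns '+' or the 'not significant' symbol."""
    t_stat, p_value = ttest_rel(acc_proposed, acc_other)
    significantly_better = (p_value < alpha) and (sum(acc_proposed) > sum(acc_other))
    return "+" if significantly_better else "≈"
```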
From the results of the studied classifiers (without feature selection), the following points can be seen:
For the WDBC dataset, the MCPS-classifier is the best classifier with 92.6071% mean testing accuracy, the kNN classifier is second with 92.4429%, and the GA-classifier is third with 91.4286%. The PS-classifier and ICA-classifier have 90.0893% and 85.3393% mean testing accuracy, respectively.
On the WBC dataset, the MCPS-classifier outperforms the other classifiers with 95.2100% mean testing accuracy and 0.7100% standard deviation. The kNN classifier and GA-classifier are the second and third classifiers with means of 95.15% and 92.0448%. The remaining classifiers are the PS-classifier and ICA-classifier with 88.5224% and 86.8806% mean testing accuracy, respectively.
For the heart dataset, the MCPS-classifier outperforms the other classifiers with 74.1111% mean testing accuracy and 1.3266% standard deviation. The other classifiers give lower testing accuracies: GA-classifier 72.3333%, PS-classifier 68.5556%, kNN classifier 65.9259% and ICA-classifier 65.2963%.
For all datasets, the highest accuracy is reported when feature selection is employed.
From the results of the WDBC dataset classification, it can be seen that the best result is achieved using the MCPS-classifier with feature selection, with an accuracy of 92.7857%, sensitivity of 0.9472, specificity of 0.9179 and MCC of 0.8457. These results are achieved with about 15 features, compared with the 30 features of the original dataset.
On the WBC dataset, the best accuracy of 95.4179% is reached using the MCPS-classifier with feature selection. On this dataset, the best accuracy is obtained with fewer than 5 features, compared with the 9 features of the original dataset.
As can be seen in [Table 3], when the dataset is classified using the MCPS-classifier with the original features, a classification accuracy of 74.1111%, sensitivity of 0.7520, specificity of 0.7254 and an MCC of 0.4729 are obtained. All the results were improved using feature selection: the accuracy increased from 74.1111% to 75.8889%, the sensitivity increased from 0.7520 to 0.7678, the specificity increased from 0.7254 to 0.7461, and the MCC value increased from 0.4729 to 0.5093.
The results of classifying the mentioned medical datasets indicate that some redundant features exist in the whole feature set, and that feature selection is an important and necessary block in model construction.
[Figure 2] shows the average rate of recognition (%) with respect to the number of function evaluations for (a) WDBC data classification, (b) WBC data classification, and (c) heart data classification. In [Figure 2], for a fair comparison between the proposed PSO and standard PSO, we consider the number of function evaluations instead of the number of iterations.
[Figure 2] demonstrates that the MCPS-classifier finds a proper trajectory for converging to the solutions with a lower number of function evaluations, and this is the result of using the mutation operator and chaotic sequences.
Conclusion   
This paper presents mutation operators and chaotic sequences to overcome premature convergence and enhance the optimization performance of PSO. The effectiveness and power of MCPSO as a global search metaheuristic algorithm, especially in high dimensional spaces, motivated us to design a swarm intelligence based classifier. Accordingly, MCPSO is used to obtain the decision hyperplanes in the feature space. The experimental results show that the performance of the MCPS-classifier is better than those of the kNN classifier, PS-classifier, GA-classifier and ICA-classifier.
Our results also show that the proposed classifier works well for medical dataset recognition. In these cases, feature selection helps to reduce the amount of unnecessary, irrelevant and redundant features in datasets and improves the classification accuracy with less computational effort.
References   
1.  Zahiri SH, Seyedin SA. Swarm intelligence based classifiers. J Franklin Inst 2007;344:36276. 
2.  Mitchell T. Machine Learning. New York: McGraw-Hill; 1997. 
3.  Bhavani SD, Rani TS, Bapi RS. Feature selection using correlation fractal dimension: Issues and applications in binary classification problems. Appl Soft Comput 2008;8:55563. 
4.  García-Pedrajas N, Ortiz-Boyer D. An empirical study of binary classifier fusion methods for multiclass classification. Inf Fusion 2011;12:11130. 
5.  Polat K, Günes S. A novel hybrid intelligent method based on C4.5 decision tree classifier and one-against-all approach for multiclass classification problems. Expert Syst Appl 2009;36:158792. 
6.  Kurzynski MW. The optimal strategy of a tree classifier. Pattern Recognit 1983;16:817. 
7.  De Silva CR, Ranganath S, De Silva LC. Cloud basis function neural network: A modified RBF network architecture for holistic facial expression recognition. Pattern Recognit 2008;41:124153. 
8.  Wu JY. MIMO CMAC neural network classifier for solving classification problems. Appl Soft Comput 2011;11:232633. 
9.  Naghsh-Nilchi AR, Aghashahi M. Classification of epileptic states using root-MUSIC and MLPNN. 2009. 
10.  Naghsh-Nilchi AR, Kadkhodamohammadi AR. Cardiac arrhythmias classification method based on MUSIC, morphological descriptors, and neural network. EURASIP J Adv Signal Process 2008;2008:202. 
11.  Duda RO, Hart PE, Stork DG. Pattern Classification. 2 ^{nd} ed. New York: Wiley; 2001. 
12.  Liu Y, You Z, Cao L. A novel and quick SVMbased multiclass classifier. Pattern Recognit 2006;39:225864. 
13.  Qian H, Mao Y, Xiang W, Wang Z. Recognition of human activities using SVM multiclass classifier. Pattern Recognit Lett 2010;31:10011. 
14.  Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2 ^{nd} ed. New York: Springer Verlag; 2009. 
15.  Kennedy J, Eberhart R. Particle swarm optimization. In: Proc. of IEEE Int. Conf. Neural Networks; 1995. p. 19428. 
16.  Angeline PJ. Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences. In: Lecture Notes in Computer Science. Berlin: Springer; 1998. p. 60110. 
17.  Jiang Y, Hu T, Huang CC, Wu X. An improved particle swarm optimization algorithm. Appl Math Comput 2007;193:2319. 
18.  Acir N, Özdamar Ö, Guzelis C. Automatic classification of auditory brainstem responses using SVM-based feature selection algorithm for threshold detection. Eng Appl Artif Intell 2006;19:20918. 
19.  Valentini G, Muselli M, Ruffino F. Cancer recognition with bagged ensembles of support vector machines. Neuro Comput 2004;56:4616. 
20.  Trelea IC. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf Process Lett 2003;85:31725. 
21.  Naka S, Genji T, Yura T, Fukuyama Y. A hybrid particle swarm optimization for distribution state estimation. IEEE Trans Power Syst 2003;18:608. 
22.  Shi Y, Eberhart RC. Empirical study of particle swarm optimization. In: Proceedings of Congress on Evolutionary Computation. Washington, DC; 2002. p. 19459. 
23.  Kennedy J, Eberhart RC. A discrete binary version of the particle swarm algorithm. In: Proceedings of the World Multiconference on Systemics, Cybernetics and Informatics. Piscataway, NJ; 1997. p. 41048. 
24.  He Y, Zhou J, Xiang X, Chen H, Qin H. Comparison of different chaotic maps in particle swarm optimization algorithm for longterm cascaded hydroelectric system scheduling. Chaos Solitons Fractals 2009;42:316976. 
25.  Coelho L, Mariani V. Use of chaotic sequences in a biologically inspired algorithm for engineering design optimization. Expert Syst Appl 2008;34:190513. 
26.  Gao H, Zhang Y, Liang S, Li D. A new chaotic algorithm for image encryption. Chaos Solitons Fractals 2006;29:3939. 
27.  Kuo D. Chaos and its computing paradigm. IEEE Potentials Mag 2005;24:135. 
28.  Michalewicz Z. Genetic Algorithms + Data Structures = Evolution Programs. New York: Springer-Verlag; 1992. 
29.  Lee S, Soak S, Oh S, Pedrycz W, Jeon M. Modified binary particle swarm optimization. Prog Nat Sci 2008;18:11616. 
30.  Matthews BW. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim Biophys Acta 1975;405:44251. [ PUBMED] 
Authors   
Zahra Assarzadeh received her B.Sc. degree in computer engineering from Mashhad University, Mashhad, Iran, in 2007, and she is currently an M.Sc. student at the Department of Artificial Intelligence Engineering, University of Isfahan, Isfahan, Iran. Her research interests include image processing, neural networks, pattern recognition, and their applications in medicine.
Ahmad Reza Naghsh-Nilchi is an associate professor at the University of Isfahan, Iran. He received his B.S., M.S. and Ph.D., all in electrical engineering, from the University of Utah, Salt Lake City, Utah, USA. His research interests include medical image and signal processing as well as intensive computing. He has been an author or coauthor of several journal articles, conference papers and a couple of book sections. He is the editor-in-chief of the Journal of Computing and Security. He has served as the chairman of the Computer Engineering department for three terms and is now the chairman of the newly established department of Artificial Intelligence and Multimedia Engineering, all at the University of Isfahan. He has collaborated with internationally known institutions and peers and served as a research scholar at the National University of Ireland (summer 2011) and the University of California, Irvine (2012). He was listed in Who's Who in the World 2011®.