ARTICLE IN PRESS
JID: INS [m3Gsc;July 16, 2015;19:41]
Information Sciences xxx (2015) xxx–xxx
A new multi-objective particle swarm optimization algorithm based on decomposition
Dai Cai a,∗, Wang Yuping b, Ye Miao b
a College of Computer Science, Shaanxi Normal University, Xi'an 710062, China
b School of Computer Science and Technology, Xidian University, Xi'an 710071, China

Article info
Received 14 July 2014
Revised 16 May 2015
Accepted 4 July 2015
Available online xxx
Keywords: Particle swarm optimization; Crowding distance

Abstract
The diversity of solutions is of great importance for multi-objective evolutionary algorithms.
In this paper, a new multi-objective particle swarm optimization algorithm based on decomposition (MPSO/D) is proposed. Firstly, the objective space of a multi-objective problem is decomposed into a set of sub-regions based on a set of direction vectors; MPSO/D then makes each sub-region contain a solution so as to maintain diversity. Secondly, considering the convergence of solutions, MPSO/D uses the crowding distance to calculate the fitness values of the reserved solutions for the selection operator, and uses the neighboring particles of a particle to determine the global best historical position (gbest) of that particle. The proposed algorithm has been compared with NSGAII, MOEA/D and NNIA on sixteen test instances. The experimental results illustrate that the proposed algorithm outperforms NSGAII, MOEA/D and NNIA in terms of convergence and diversity.

© 2015 Published by Elsevier Inc.

1. Introduction
Many real-world problems involve several optimization objectives or criteria; such problems are referred to as multi-objective optimization problems (MOPs). Unlike single-objective optimization problems, MOPs have a set of non-dominated solutions, also known as Pareto optimal solutions (the image of the set of Pareto optimal solutions in the objective space is called the Pareto front).
Therefore, multi-objective optimization algorithms for MOPs should be able to: (1) discover solutions as close to the Pareto optimal solutions as possible; (2) find solutions distributed as uniformly as possible along the obtained non-dominated front; (3) obtain solutions that cover the true Pareto front (PF) as broadly as possible. However, achieving these three goals simultaneously is still a challenge for multi-objective optimization algorithms.
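The Pareto dominance relation underlying these goals can be sketched as follows (a minimal Python illustration for minimization problems; the function names are ours, not part of any algorithm discussed here):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter a list of objective vectors down to the non-dominated set."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (3.0, 3.0) is dominated by (2.0, 2.0); the other three points form the front.
front = non_dominated([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)])
```

An MOP solver seeks a finite approximation of this non-dominated set that is both close to and well spread along the true Pareto front.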
Among various multi-objective optimization algorithms, multi-objective evolutionary algorithms (MOEAs), which make use of the strategy of population evolution, are an effective method for solving MOPs. Nowadays, there exist many MOEAs, such as multi-objective genetic algorithms (GAs) [11,29], multi-objective particle swarm optimization algorithms (MPSOs) [25,47], multi-objective differential evolution algorithms [44,49], multi-objective immune clone algorithms [18,34], and group search optimizers. The recent literature indicates that PSO, which is inspired by the social behavior of bird flocking or fish schooling, is a potential competitor of GA, which has mostly been used for solving MOPs [2,23,35,39]. Although one cannot conclude that PSO outperforms GA, PSO has many advantages, for example, easy implementation, effective memory, and efficient maintenance of solution diversity [25,47].

∗ Corresponding author. Tel.: +86 13720727536.
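For reference, the canonical single-objective PSO update that multi-objective variants build on can be sketched as follows (the inertia weight w and acceleration coefficients c1, c2 shown are common illustrative values, not values prescribed by this paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a single particle.

    x, v      -- current position and velocity (lists of floats)
    pbest     -- the particle's personal best position
    gbest     -- the global (or neighborhood) best position
    Each velocity component blends inertia with stochastic pulls
    toward pbest and gbest; the position then moves by the new velocity.
    """
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

In the multi-objective setting there is no single gbest, which is exactly the second issue discussed in the next paragraphs.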
E-mail addresses: email@example.com (D. Cai), firstname.lastname@example.org (W. Yuping).
http://dx.doi.org/10.1016/j.ins.2015.07.018
0020-0255/© 2015 Published by Elsevier Inc.
However, there are two particular issues to be addressed when applying PSO to solve MOPs. The first issue in multi-objective particle swarm optimization is archive maintenance to balance convergence and diversity. An external elitist archive is used to store the non-dominated solutions obtained by an algorithm, and these solutions are filtered by a certain quality measure, such as density. Although there are several proposals for MPSO in which the archive size is unconstrained [1,3], a pre-fixed maximum archive size is widely applied because the number of non-dominated solutions can grow very fast, which quickly increases the computational cost of updating the archive; besides, physical memory is always finite in size. Thus, an appropriate archiving strategy is required to filter out non-dominated solutions with a lower quality measure. There are many strategies used to update the archive; for example, many MPSOs [4,5,17,20] adopt the crowding distance to prune the archive, and kernel density estimation and clustering mechanisms have also been employed in MPSOs to keep the size of the external archive constant. Another particular issue in MPSOs is the update of the global best (gbest) and the personal best (pbest), because there is no single absolute best solution, but rather a set of non-dominated solutions. Several methods have been proposed to select gbest and pbest: the crowding distance technique can be applied to select gbest; a dynamic neighborhood strategy has been proposed to update gbest in MPSO; the decomposition approach has been applied in MPSOs [19,28,42] to select gbest and pbest; and ranking methods [12,16,41] are used to identify the best solutions to guide the search process.
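The crowding-distance pruning mentioned above can be sketched as follows (a simplified illustration following the standard NSGA-II definition; the helper names and the one-at-a-time removal loop are our assumptions, not the exact archive update of any cited MPSO):

```python
def crowding_distances(front):
    """Crowding distance of each point in a front of objective vectors:
    for each objective, sort the front, give the two boundary points
    infinite distance (so they are always kept), and credit each interior
    point with the normalized gap between its two sorted neighbors."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if fmax == fmin:
            continue  # degenerate objective: all values equal
        for j in range(1, n - 1):
            gap = front[order[j + 1]][k] - front[order[j - 1]][k]
            dist[order[j]] += gap / (fmax - fmin)
    return dist

def prune_archive(front, max_size):
    """Repeatedly drop the most crowded (smallest-distance) member
    until the archive fits within max_size."""
    members = list(front)
    while len(members) > max_size:
        d = crowding_distances(members)
        members.pop(d.index(min(d)))
    return members
```

Because boundary points receive infinite distance, pruning preserves the extremes of the front and discards solutions from its densest regions first, which is the diversity-preserving behavior the archive update aims for.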