Evolutionary multimodal optimization
In applied mathematics, multimodal optimization deals with optimization tasks that involve finding all or most of the multiple solutions of a problem, as opposed to a single best solution.

Contents

  • 1 Motivation
  • 2 Background
  • 3 Multimodal optimization using GAs
  • 4 Multimodal optimization using DE
  • 5 Multimodal optimization using Swarm Intelligence based algorithms
  • 6 See also
  • 7 References
  • 8 Bibliography
  • 9 External links

 

Motivation

Knowledge of multiple solutions to an optimization task is especially helpful in engineering, where the best result may not always be realizable due to physical and/or cost constraints. In such a scenario, if multiple solutions (locally and/or globally optimal) are known, the implementation can be quickly switched to another solution while still maintaining optimal system performance. Multiple solutions can also be analyzed to discover hidden properties (or relationships) that make them high-performing. In addition, algorithms for multimodal optimization usually not only locate multiple optima in a single run but also preserve population diversity, resulting in good global optimization ability on multimodal functions. Moreover, techniques for multimodal optimization are often borrowed as diversity-maintenance techniques for other problems.[1]

Background

Classical techniques of optimization would need multiple restart points and multiple runs in the hope that a different solution is discovered in each run, with no guarantee of success. Evolutionary algorithms (EAs), owing to their population-based approach, provide a natural advantage over classical optimization techniques: they maintain a population of candidate solutions, which are processed in every generation, and if multiple solutions can be preserved over all these generations, then at termination the algorithm yields multiple good solutions rather than only the single best one. Note that this runs against the natural tendency of EAs, which always converge to the best solution, or to a sub-optimal solution on a rugged, "badly behaved" function. Finding and maintaining multiple solutions is therefore the central challenge of using EAs for multimodal optimization. Niching[2] is the generic term for techniques that find and preserve multiple stable niches, or favorable parts of the solution space possibly around multiple solutions, so as to prevent convergence to a single solution.
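The limitation of the restart strategy can be illustrated with a minimal sketch: a simple local hill climber started from two different points ends at two different optima, and no single run recovers both. The bimodal test function sin²(2πx), with equal-height peaks at x = 0.25 and x = 0.75, is an illustrative assumption, not part of the literature cited here.

```python
import math

def hill_climb(f, x0, step=0.01, iters=500):
    # Greedy local search: repeatedly move to the best of the three
    # points {x - step, x, x + step}. Converges to the nearest optimum.
    x = x0
    for _ in range(iters):
        x = max((x - step, x, x + step), key=f)
    return x

# Bimodal test function with equal-height maxima at x = 0.25 and x = 0.75.
f = lambda x: math.sin(2 * math.pi * x) ** 2

print(hill_climb(f, 0.2))   # climbs to the peak near x = 0.25
print(hill_climb(f, 0.8))   # climbs to the peak near x = 0.75
```

Each restart finds only the optimum in whose basin of attraction it happens to start, which is why a population-based method with niching is attractive.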

The field of EAs today encompasses genetic algorithms (GAs), differential evolution (DE), particle swarm optimization (PSO), and evolution strategies (ES), among others. Attempts have been made to solve multimodal optimization in all of these realms, and most, if not all, of the various methods implement niching in some form or another.

Multimodal optimization using GAs

Pétrowski's clearing method, Goldberg's sharing function approach, restricted mating, and the maintenance of multiple subpopulations are some of the popular approaches proposed by the GA community. The first two methods are particularly well studied and respected in the GA community.

Recently, an evolutionary multiobjective optimization (EMO) approach was proposed,[3] in which a suitable second objective is added to the originally single-objective multimodal optimization problem, so that the multiple solutions form a weak Pareto-optimal front. Hence, the multimodal optimization problem can be solved for its multiple solutions using an EMO algorithm. Improving upon their work,[4] the same authors have made their algorithm self-adaptive, thus eliminating the need to pre-specify the parameters.

An approach that does not use any radius for separating the population into subpopulations (or species), but employs the space topology instead, has also been proposed.[5]

Demonstration: finding multiple optima using genetic algorithms in a multimodal optimization task (the algorithm demonstrated is the multi-objective approach of Deb and Saha).

Multimodal optimization using DE

The niching methods used in GAs have also been explored with success in the DE community. DE-based local selection and global selection approaches have also been attempted for solving multimodal problems. DEs coupled with local search algorithms (memetic DE) have been explored as an approach to solving multimodal problems.
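One widely used niching variant in the DE literature is crowding DE, where a trial vector competes against the nearest population member instead of its own parent, so individuals near distinct optima are not displaced by each other. The sketch below is a hedged one-dimensional illustration, not any specific published implementation; the test function, population size, and control parameters F and CR are assumptions chosen for the demo.

```python
import math
import random

def crowding_de(f, bounds, pop_size=40, gens=300, F=0.5, CR=0.9, seed=0):
    # Crowding DE: each trial vector replaces the *nearest* population
    # member if it is better, which preserves individuals around
    # several distinct optima (maximization).
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            if rng.random() < CR:
                trial = pop[a] + F * (pop[b] - pop[c])   # DE/rand/1 mutation
            else:
                trial = pop[i]                           # crossover keeps parent (1-D case)
            trial = min(max(trial, lo), hi)              # clamp to bounds
            ft = f(trial)
            nearest = min(range(pop_size), key=lambda j: abs(pop[j] - trial))
            if ft > fit[nearest]:
                pop[nearest], fit[nearest] = trial, ft
    return pop

# Equal-height bimodal function with maxima near x = 0.25 and x = 0.75:
# the final population retains members near both peaks.
f = lambda x: math.sin(2 * math.pi * x) ** 2
final = crowding_de(f, (0.0, 1.0))
```

With parent replacement instead of nearest-neighbor replacement, the same loop would typically collapse onto a single peak; the replacement rule is the entire niching mechanism here.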

For a comprehensive treatment of multimodal optimization methods in DE, refer to the Ph.D. thesis Ronkkonen, J. (2009), Continuous Multimodal Global Optimization with Differential Evolution Based Methods.[6]

Multimodal optimization using swarm intelligence based algorithms

Glowworm swarm optimization (GSO) is a swarm intelligence based algorithm, introduced by K.N. Krishnanand and D. Ghose in 2005, for simultaneous computation of multiple optima of multimodal functions.[7][8][9][10] The algorithm shares a few features with some better known algorithms, such as ant colony optimization and particle swarm optimization, but with several significant differences. The agents in GSO are thought of as glowworms that carry a luminescence quantity called luciferin along with them. The glowworms encode the fitness of their current locations, evaluated using the objective function, into a luciferin value that they broadcast to their neighbors. The glowworm identifies its neighbors and computes its movements by exploiting an adaptive neighborhood, which is bounded above by its sensor range. Each glowworm selects, using a probabilistic mechanism, a neighbor that has a luciferin value higher than its own and moves toward it. These movements—based only on local information and selective neighbor interactions—enable the swarm of glowworms to partition into disjoint subgroups that converge on multiple optima of a given multimodal function.
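The mechanics described above (luciferin broadcast, adaptive neighborhood, probabilistic movement toward brighter neighbors) can be sketched as follows. This is a simplified one-dimensional sketch under assumed parameter values (decay `rho`, enhancement `gamma`, range-update rate `beta`, desired neighbor count `nt`, step size `s`, sensor range `rs`); the published algorithm is defined for general n-dimensional search spaces.

```python
import math
import random

def gso(J, bounds, n=60, iters=150, rho=0.4, gamma=0.6,
        beta=0.08, nt=5, s=0.03, rs=0.3, seed=1):
    # Simplified 1-D glowworm swarm optimization sketch.
    rng = random.Random(seed)
    lo, hi = bounds
    x = [rng.uniform(lo, hi) for _ in range(n)]
    luc = [5.0] * n    # initial luciferin levels
    r = [rs] * n       # adaptive local-decision ranges, capped at sensor range rs
    for _ in range(iters):
        # Luciferin update: decay plus fitness-proportional reinforcement.
        luc = [(1 - rho) * l + gamma * J(xi) for l, xi in zip(luc, x)]
        new_x = x[:]
        for i in range(n):
            # Neighbors: strictly brighter glowworms within the decision range.
            nbrs = [j for j in range(n)
                    if 1e-12 < abs(x[j] - x[i]) < r[i] and luc[j] > luc[i]]
            if nbrs:
                # Probabilistic selection weighted by luciferin difference.
                j = rng.choices(nbrs, weights=[luc[k] - luc[i] for k in nbrs])[0]
                direction = 1.0 if x[j] > x[i] else -1.0
                new_x[i] = min(max(x[i] + s * direction, lo), hi)
            # Range update: shrink when crowded, grow when neighbors are scarce.
            r[i] = min(rs, max(0.0, r[i] + beta * (nt - len(nbrs))))
        x = new_x
    return x

# Agents split into disjoint subgroups around the two maxima of a
# bimodal function (peaks near x = 0.25 and x = 0.75).
J = lambda x: math.sin(2 * math.pi * x) ** 2
final = gso(J, (0.0, 1.0))
```

The key difference from PSO is visible in the loop: there is no global best, so subgroups attracted to different local maxima never merge, which is what allows simultaneous capture of multiple optima.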

See also

References

  1. Wong, K.C., et al. (2012). "Evolutionary multimodal optimization using the principle of locality". Information Sciences.
  2. Mahfoud, S.W. (1995). "Niching methods for genetic algorithms".
  3. Deb, K., Saha, A. (2010). "Finding Multiple Solutions for Multimodal Optimization Problems Using a Multi-Objective Evolutionary Approach". GECCO 2010, in press.
  4. Saha, A., Deb, K. (2010). "A Bi-criterion Approach to Multimodal Optimization: Self-adaptive Approach". Lecture Notes in Computer Science, Vol. 6457, pp. 95–104.
  5. Stoean, C., Preuss, M., Stoean, R., Dumitrescu, D. (2010). "Multimodal Optimization by Means of a Topological Species Conservation Algorithm". IEEE Transactions on Evolutionary Computation, Vol. 14, No. 6, pp. 842–864.
  6. Ronkkonen, J. (2009). Continuous Multimodal Global Optimization with Differential Evolution Based Methods.
  7. Krishnanand, K.N., Ghose, D. (2005). "Detection of multiple source locations using a glowworm metaphor with applications to collective robotics". IEEE Swarm Intelligence Symposium, Pasadena, California, USA, pp. 84–91.
  8. Krishnanand, K.N., Ghose, D. (2009). "Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions". Swarm Intelligence, Vol. 3, No. 2, pp. 87–124.
  9. Krishnanand, K.N., Ghose, D. (2008). "Theoretical foundations for rendezvous of glowworm-inspired agent swarms at multiple locations". Robotics and Autonomous Systems, 56(7): 549–569.
  10. Krishnanand, K.N., Ghose, D. (2006). "Glowworm swarm based optimization algorithm for multimodal functions with collective robotics applications". Multi-agent and Grid Systems, 2(3): 209–222.

Bibliography

External links
