Other Strategies

  1. Different elite sample sizes. The idea is to update different model parameters using different numbers of "elite" samples. For example, for the normal distribution one could update the mean mu using only the single best elite sample, and update the standard deviation sigma using the 10 best elite samples (see the first sketch after this list).

  2. Different smoothing parameters. Here we use different smoothing parameters for different model parameters. For example, for the normal distribution one could update the mean mu using smoothing parameter alpha and the standard deviation sigma using smoothing parameter beta (see the second sketch below).

  3. Dependent components. For a continuous optimization problem the standard sampling distribution is the normal (Gaussian) distribution with independent components. Generalizing to a multivariate normal sampling distribution is not difficult, however, and can significantly increase the efficiency of the CE method. On the other hand, reliably estimating the increased number of distribution parameters typically requires a larger sample size (see the third sketch below).

  4. Discretisation of a continuous problem. The idea is to translate a continuous optimization problem into a discrete one by using a binary representation of the decision variables, as is often done in Genetic Algorithms [1], [5]. The problem can then be solved by generating the binary representations (vectors) via independent Bernoulli random variables; the updating steps are the same as in the max-cut problem (see the last sketch below).
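
The sketches below illustrate these strategies in Python with numpy; the toolbox itself does not prescribe any particular implementation, and the toy objectives, sample sizes and other numerical settings are assumptions chosen purely for illustration.

First, strategy 1: a one-dimensional CE iteration in which the mean mu is re-estimated from the single best sample and the standard deviation sigma from the 10 best.

    import numpy as np

    def f(x):
        return -(x - 2.0) ** 2          # toy objective (an assumption), maximum at x = 2

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 5.0                # initial parameters of the sampling density
    N = 100                             # candidate solutions per iteration

    for _ in range(50):
        x = rng.normal(mu, sigma, N)    # draw N candidate solutions
        order = np.argsort(f(x))[::-1]  # indices of the samples, best first
        mu = x[order[0]]                # mean from the single best sample
        sigma = x[order[:10]].std()     # std dev from the 10 best samples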
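
Second, strategy 2: smoothed updates with separate smoothing parameters, alpha for the mean and beta for the standard deviation. The particular values of alpha and beta below are assumptions, not recommendations.

    import numpy as np

    def f(x):
        return -(x - 2.0) ** 2          # toy objective (an assumption)

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 5.0
    N, Nelite = 100, 10
    alpha, beta = 0.8, 0.4              # separate smoothing parameters for mu and sigma

    for _ in range(50):
        x = rng.normal(mu, sigma, N)
        elite = x[np.argsort(f(x))[-Nelite:]]            # the Nelite best samples
        mu = alpha * elite.mean() + (1 - alpha) * mu     # mean smoothed with alpha
        sigma = beta * elite.std() + (1 - beta) * sigma  # std dev smoothed with beta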
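
Third, strategy 3: CE with a full-covariance multivariate normal sampling distribution in two dimensions. Note the larger sample size, needed because the covariance matrix has d(d+1)/2 free entries to estimate; the small jitter term is an assumption added only to keep the estimated covariance positive definite.

    import numpy as np

    def f(X):
        # toy quadratic objective (an assumption) with correlated components, maximum at (1, 3)
        a, b = X[:, 0] - 1.0, X[:, 1] - 3.0
        return -(a ** 2 + a * b + b ** 2)

    rng = np.random.default_rng(0)
    d = 2
    mu = np.zeros(d)
    Sigma = 25.0 * np.eye(d)            # initial covariance matrix
    N, Nelite = 500, 50                 # larger N: d(d+1)/2 covariance entries to estimate

    for _ in range(50):
        X = rng.multivariate_normal(mu, Sigma, N)
        elite = X[np.argsort(f(X))[-Nelite:]]       # the Nelite best rows
        mu = elite.mean(axis=0)                     # elite sample mean
        Sigma = np.cov(elite, rowvar=False) + 1e-9 * np.eye(d)  # full elite covariance, jittered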
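
Finally, strategy 4: a continuous decision variable encoded as a vector of bits generated by independent Bernoulli random variables, with the success probabilities p_i updated from the elite samples exactly as in the max-cut problem. The 16-bit encoding, its range, and the decode function are illustrative assumptions.

    import numpy as np

    def decode(B, lo=-10.0, hi=10.0):
        # map each row of n bits to a real number in [lo, hi] (assumed encoding)
        n = B.shape[1]
        ints = B @ (2 ** np.arange(n)[::-1])
        return lo + (hi - lo) * ints / (2 ** n - 1)

    def f(x):
        return -(x - 2.0) ** 2          # toy objective (an assumption), maximum at x = 2

    rng = np.random.default_rng(0)
    n = 16                              # bits per decision variable
    p = np.full(n, 0.5)                 # Bernoulli success probabilities
    N, Nelite = 200, 20

    for _ in range(50):
        B = (rng.random((N, n)) < p).astype(int)        # Bernoulli(p_i) bit vectors
        elite = B[np.argsort(f(decode(B)))[-Nelite:]]   # the Nelite best vectors
        p = elite.mean(axis=0)          # component-wise update, as in the max-cut problem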


