This is a huge topic.

Differential evolution (the DEoptim package) would be one good starting point; there is also a simulated annealing method built into optim() (method = "SANN"), but it usually requires significant tuning.
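
For the noisy Rosenbrock defined in the post below, an (untested) sketch with DEoptim might look like this; the [-5, 5] search box and the iteration count are only placeholder assumptions, since the post only gives a starting point:

    library(DEoptim)
    ## noisy Rosenbrock as defined in the original post
    fn <- function(x) (1 + rnorm(1, sd = 0.01)) * adagio::fnRosenbrock(x)
    set.seed(1)
    res <- DEoptim(fn, lower = rep(-5, 5), upper = rep(5, 5),
                   control = DEoptim.control(itermax = 500, trace = FALSE))
    res$optim$bestmem   # best parameter vector found
    res$optim$bestval   # its (noisy) objective value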

Genetic algorithms are another possibility.
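
If you want to try that route, a rough (untested) sketch with the GA package (my suggestion, not something from the original post) could be; note that ga() maximizes, so the objective is negated:

    library(GA)
    res <- ga(type = "real-valued",
              fitness = function(x) -fn(x),    # ga() maximizes
              lower = rep(-5, 5), upper = rep(5, 5),
              popSize = 50, maxiter = 500, run = 100)
    res@solution        # best parameter vector(s) found
    -res@fitnessValue   # corresponding (noisy) objective value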

You could look at the NLopt list of algorithms (https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/), focusing on the options for derivative-free global optimization, and then use them via the nloptr package.
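
For example (again untested; the specific algorithm NLOPT_GN_CRS2_LM and the bounds are just placeholders):

    library(nloptr)
    res <- nloptr(x0 = rep(0.1, 5), eval_f = fn,
                  lb = rep(-5, 5), ub = rep(5, 5),
                  opts = list(algorithm = "NLOPT_GN_CRS2_LM",
                              maxeval = 5000, xtol_rel = 1e-12))
    res$solution    # best parameter vector found
    res$objective   # its (noisy) objective value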

  Good luck ...

On 2023-08-13 3:28 p.m., Hans W wrote:
While working on 'random walk' applications, I got interested in
optimizing noisy objective functions. As an (artificial) example, the
following is the Rosenbrock function, where relative Gaussian noise with
standard deviation `sd = 0.01` is applied multiplicatively to the function value.

     ## Rosenbrock with multiplicative Gaussian noise (relative sd = 0.01)
     fn <- function(x)
         (1 + rnorm(1, sd = 0.01)) * adagio::fnRosenbrock(x)

To smooth out the noise, define another function `fnk(x, k = 1)` that
calls `fn` k times and returns the mean value of those k function
applications.

     fnk <- function(x, k = 1) {     # fnk(x) is the same as fn(x)
         rv <- 0.0
         for (i in 1:k) rv <- rv + fn(x)
         return(rv / k)              # mean of k noisy evaluations
     }

When we apply several optimization solvers to the noisy function and its
smoothed versions, we get, for instance, the following results.
(The starting point is always `rep(0.1, 5)`, the maximal number of
iterations is 5000, the relative tolerance is 1e-12, and an optimization
run counts as successful if the function value at the minimum is below 1e-06.)

     k     nmk    anms  neldermead  ucminf  optim_BFGS
    --------------------------------------------------
     1    0.21    0.32        0.13    0.00        0.00
     3    0.52    0.63        0.50    0.00        0.00
    10    0.81    0.91        0.87    0.00        0.00

Solvers: nmk = dfoptim::nmk, anms = pracma::anms [both Nelder-Mead codes]
          neldermead = nloptr::neldermead,
          ucminf = ucminf::ucminf, optim_BFGS = optim with method "BFGS"

Read the table as follows: `nmk` succeeds in 21% of the trials (for k = 1),
while, for example, `optim` with BFGS never comes close to the true
minimum.
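
For concreteness, one such trial with `nmk` and `k = 10` might be set up
roughly like this (a sketch only; `maxfeval` stands in for the iteration
limit mentioned above):

     ## one trial: Nelder-Mead (dfoptim::nmk) on the smoothed objective;
     ## extra arguments after 'control' are passed on to fnk()
     sol <- dfoptim::nmk(rep(0.1, 5), fnk,
                         control = list(maxfeval = 5000, tol = 1e-12),
                         k = 10)
     sol$value < 1e-6    # TRUE counts as a success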

I think it is reasonable to assume that gradient-based methods do
poorly with noisy objectives, though I did not expect to see them fail
so clearly. On the other hand, Nelder-Mead implementations do quite
well if there is not too much noise.

In real-world applications it will often not be possible to repeat the
same measurement several times; that is, we will have to live with
`k = 1`. In my applications with long 'random walks', repeating the
calculation several times in a row takes considerable time.

QUESTION: What could be other approaches to minimize noisy functions?

I looked through some "Stochastic Programming" tutorials and did not
find them very helpful in this situation. Of course, I might have
looked into these works too superficially.

______________________________________________
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
