Thanks Bill and Duncan. I only asked for advice, but I got an education too.

Michael

On 03/08/2021 21:24, Bill Dunlap wrote:
In maximum likelihood problems, even when the individual density values are fairly far from zero, their product may underflow to zero. Optimizers have problems when there is a large flat area.
    > q <- runif(n=1000, -0.1, +0.1)
    > prod(dnorm(q))
    [1] 0
    > sum(dnorm(q, log=TRUE))
    [1] -920.6556
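
A sketch of that optimizer failure (illustrative, with simulated data): maximizing the raw likelihood of a normal mean stalls on the underflowed flat region, while the log likelihood optimizes cleanly.
    > x <- rnorm(100, mean = 5)
    > lik <- function(mu) prod(dnorm(x, mean = mu))
    > optim(50, function(mu) -lik(mu), method = "BFGS")$par   # stays at 50: the surface is flat 0 there
    > loglik <- function(mu) sum(dnorm(x, mean = mu, log = TRUE))
    > optim(50, function(mu) -loglik(mu), method = "BFGS")$par  # converges near 5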

A more minor advantage for some probability-related functions is speed: e.g., dnorm(..., log=TRUE) does not need to evaluate exp().
    > q <- runif(1e6, -10, 10)
    > system.time(for(i in 1:100)dnorm(q, log=FALSE))
       user  system elapsed
       9.13    0.11    9.23
    > system.time(for(i in 1:100)dnorm(q, log=TRUE))
       user  system elapsed
       4.60    0.19    4.78

  -Bill

On Tue, Aug 3, 2021 at 11:53 AM Duncan Murdoch <murdoch.dun...@gmail.com> wrote:

    On 03/08/2021 12:20 p.m., Michael Dewey wrote:
     > Short version
     >
     > Apart from the ability to work with values of p too small to be of
     > much practical use, what are the advantages and disadvantages of
     > setting this to TRUE?
     >
     > Longer version
     >
     > I am contemplating upgrading various functions in one of my packages
     > to use this, and as far as I can see it would only have the advantage
     > of allowing people to use very small p-values, but before I go ahead
     > have I missed anything? I am most concerned with negatives, but if
     > there is any other advantage I would mention that in the vignette. I
     > am not concerned about speed or the extra effort in coding and
     > expanding the documentation.

    These are often needed in likelihood problems.  In just about any
    problem where the normal density shows up in the likelihood, you're
    better off working with the log likelihood and setting log = TRUE in
    dnorm, because sometimes you want to evaluate the likelihood very far
    from its mode.
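
    For example, far from the mode the density itself underflows, but its
    log stays finite:

        > dnorm(40)
        [1] 0
        > dnorm(40, log = TRUE)
        [1] -800.9189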

    The same sort of thing happens with pnorm for similar reasons.  Some
    likelihoods involve normal integrals and will need it.
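
    E.g., with pnorm's log.p argument:

        > pnorm(-40)
        [1] 0
        > pnorm(-40, log.p = TRUE)
        [1] -804.6084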

    I can't think of an example for qnorm off the top of my head, but I
    imagine there are some: maybe involving simulation way out in the tails.
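
    Something along those lines (a sketch): inverse-CDF draws when the
    probability is only available on the log scale:

        > qnorm(exp(-1000))           # exp(-1000) underflows to 0
        [1] -Inf
        > qnorm(-1000, log.p = TRUE)  # about -44.6: a usable tail quantile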

    The main negative about using logs is that they aren't always needed.

    Duncan Murdoch

--
Michael
http://www.dewey.myzen.co.uk/home.html
