On 2019-02-15 13:01:50 -0600, Jeremy Finzel wrote:
> It doesn't write out all of RAM, only the amount in use by the
> particular backend that crashed (plus all the shared segments attached
> by that backend, including the main shared_buffers, unless you disable
> that as previously mentioned).
>
> And yes, it can take a long time to generate a large core file.

"Jeremy" == Jeremy Finzel writes:

Jeremy> Yes Linux. This is very helpful, thanks. A follow-up question -
Jeremy> will it take postgres a really long time to crash (and
Jeremy> hopefully recover) if I have say 1T of RAM because it has to
Jeremy> write that all out to a core file first?
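For anyone who wants to see what a given backend would actually include in a core, here is a minimal sketch (assuming Linux and Python 3; the bit meanings are taken from the core(5) man page, and the pid argument is whatever backend or postmaster pid you want to inspect) that decodes /proc/<pid>/coredump_filter:

#!/usr/bin/env python3
# Decode /proc/<pid>/coredump_filter: which memory types would Linux
# write into a core dump for that process?  Bit meanings per core(5).
import sys

FILTER_BITS = {
    0: "anonymous private mappings",
    1: "anonymous shared mappings (e.g. mmap'd shared memory)",
    2: "file-backed private mappings",
    3: "file-backed shared mappings",
    4: "ELF headers",
    5: "private huge pages",
    6: "shared huge pages",
    7: "private DAX pages",
    8: "shared DAX pages",
}

def show_filter(pid):
    with open(f"/proc/{pid}/coredump_filter") as f:
        value = int(f.read().strip(), 16)
    print(f"coredump_filter for pid {pid}: 0x{value:x}")
    for bit, desc in FILTER_BITS.items():
        state = "dump" if value & (1 << bit) else "skip"
        print(f"  bit {bit} ({desc}): {state}")

if __name__ == "__main__":
    show_filter(sys.argv[1] if len(sys.argv) > 1 else "self")

On a stock kernel the default filter is 0x33, which includes anonymous shared mappings, the kind PostgreSQL normally uses for shared_buffers, so unless the filter is changed the dump really can be roughly backend memory plus shared_buffers.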
>
> In Linux, yes. Not sure about other OSes.
>
> You can turn off the dumping of shared memory with some unusably
> unfriendly bitwise arithmetic using the "coredump_filter" file in /proc
> for the process. (It's inherited by children, so you can just set it
> once for postmaster at server start.)
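To illustrate that "unfriendly bitwise arithmetic", here is a sketch (again assuming Linux and Python 3, and that the pid you pass in is the postmaster's, e.g. the first line of $PGDATA/postmaster.pid) that clears the shared-memory bits so core dumps skip the shared segments. Which bit matters depends on how the shared memory is allocated: bit 1 normally covers the anonymous/System V segments and bit 6 shared huge pages; clearing bit 3 (file-backed shared) as well should be harmless.

#!/usr/bin/env python3
# Clear the "shared" bits in a process's coredump_filter so its core
# dumps skip shared memory (shared_buffers etc.).  The filter is
# inherited across fork, so applying it to the postmaster covers every
# backend started after that point.
import sys

# Bits per core(5): 1 = anonymous shared, 3 = file-backed shared,
# 6 = shared huge pages, 8 = shared DAX pages.
SHARED_BITS = (1 << 1) | (1 << 3) | (1 << 6) | (1 << 8)

def drop_shared_from_core(pid):
    path = f"/proc/{pid}/coredump_filter"
    with open(path) as f:
        current = int(f.read().strip(), 16)
    wanted = current & ~SHARED_BITS
    with open(path, "w") as f:
        f.write(f"0x{wanted:x}")
    print(f"pid {pid}: coredump_filter 0x{current:x} -> 0x{wanted:x}")

if __name__ == "__main__":
    drop_shared_from_core(sys.argv[1])   # e.g. the pid from postmaster.pid

Note that already-running backends keep the filter they inherited, so it is easiest to do this right after the postmaster starts (or from whatever script or service starts it).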
On 2019-Feb-15, Jeremy Finzel wrote:
> I am trying to determine the upper size limit of a core file generated for
> any given cluster. Is it feasible that it could actually be the entire
> size of the system memory + shared buffers (i.e. really huge)?
In Linux, yes. Not sure about other OSes.

I am trying to determine the upper size limit of a core file generated for
any given cluster. Is it feasible that it could actually be the entire
size of the system memory + shared buffers (i.e. really huge)?
I've done a little bit of testing of this myself, but want to be sure I am
clear on this.
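On the original question of an upper bound: besides coredump_filter, the kernel also truncates core files at the core-size resource limit (ulimit -c / RLIMIT_CORE), so that limit is a hard cap regardless of how much memory is mapped. A small sketch, assuming Linux and Python 3, to check the limit a running process is under and to show how a launcher could cap it before starting the postmaster:

#!/usr/bin/env python3
# Inspect the core-size limit (RLIMIT_CORE) of a running process, and
# show how a launcher could cap it before starting the postmaster.
# The kernel truncates any core dump at this limit.
import resource
import sys

def show_core_limit(pid):
    # /proc/<pid>/limits has a line like:
    # "Max core file size   unlimited   unlimited   bytes"
    with open(f"/proc/{pid}/limits") as f:
        for line in f:
            if line.startswith("Max core file size"):
                print(f"pid {pid}: {line.rstrip()}")

def cap_core_size(max_bytes):
    # Only affects this process and anything it starts afterwards, so it
    # belongs in whatever script or service unit starts the postmaster.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    new_soft = max_bytes if hard == resource.RLIM_INFINITY else min(max_bytes, hard)
    resource.setrlimit(resource.RLIMIT_CORE, (new_soft, hard))
    print(f"RLIMIT_CORE soft limit now {new_soft} bytes")

if __name__ == "__main__":
    show_core_limit(sys.argv[1] if len(sys.argv) > 1 else "self")

A truncated core is of course less useful for debugging, so dropping shared memory from the dump via coredump_filter is usually the better lever; if postgres is started via systemd, LimitCORE= in the unit file controls the same limit.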