Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Gunther
I saw your replies; if there were a way of using gdb commands to set a conditional breakpoint that fires only if the n-th caller in the chain is not a certain source location, then one could exclude the bulk of these allocations and focus better. But I decided to try to re-factor this quer…
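What Gunther is wishing for does exist in reasonably recent gdb: the `$_caller_is()` convenience function (gdb 7.7 and later) inspects the call chain, so a breakpoint can be made to fire only when the n-th caller is not a given function. A sketch, where the function name and frame depth are illustrative assumptions rather than details from the thread:

```
# Stop in AllocSetAlloc only when the caller two frames up is NOT
# ExecHashJoinGetSavedTuple (name and depth are illustrative).
(gdb) break AllocSetAlloc if !$_caller_is("ExecHashJoinGetSavedTuple", 2)
```

This keeps the known-benign allocation site from drowning out the interesting hits, at the cost of evaluating the condition on every call.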

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tom Lane
Gunther writes:
> So what I am wondering now is: there seems to be an EXPLOSION of memory
> consumption near the time of the crash. That ExecutorState has
> 2,238,648,944 but just until the very last second(s) the RES memory as
> per top was 1.5 GB, I swear.

That's not hugely surprising, really, …

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tom Lane
Gunther writes:
> Is there a way of dumping that memory map info during normal runtime,
> by calling a function with the debugger?

Sure, "call MemoryContextStats(TopMemoryContext)" (or actually, since you know which context is the problematic one, just print that one context).
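For readers following along, the call Tom suggests is issued from a gdb session attached to the live backend; note that the statistics are written to the backend's stderr (the server log), not to the gdb console:

```
$ gdb -p <backend_pid>          # PID from pg_stat_activity or ps
(gdb) call MemoryContextStats(TopMemoryContext)
(gdb) detach
(gdb) quit
```

`<backend_pid>` is a placeholder; substitute the PID of the backend running the problem query.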

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Gunther
On 4/15/2019 21:49, Gunther wrote:
> I'm going to try without that DISTINCT step, or perhaps by dismantling
> this query until it works without this excessive memory growth.

It also failed. Out of memory. The resident memory size of the backend was 1.5 GB before it crashed. TopMemoryContext: 4…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Alvaro Herrera
On 2019-Apr-15, Tom Lane wrote:
> It's barely conceivable that in your particular query, there's something
> acting to break that which doesn't manifest typically; but I think it's
> much more likely that you simply haven't found the culprit allocation.
> It's quite feasible that many many ExecHas…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tom Lane
Gunther writes:
> Tom (BTW, your mail server rejects my direct mail,

[ raised eyebrow ] It's coming through fine AFAICS.

>> I'm pretty sure that's not the droid we're looking for.
>> ExecHashJoinGetSavedTuple does palloc a new tuple, but it immediately
>> sticks it into a TupleTableSlot that wi…

Re: PostgreSQL upgrade.

2019-04-15 Thread Mark Kirkwood
On 15/04/19 2:26 PM, Mark Kirkwood wrote:
> On 10/04/19 8:20 PM, Andreas Kretschmer wrote:
>> Am 10.04.19 um 07:40 schrieb Daulat Ram:
>>> We have a two-node PostgreSQL 9.6 database with streaming
>>> replication, running in a Docker environment, OS Linux
>>> (Ubuntu), and we hav…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Gunther
Wow, we are getting somewhere. Tom (BTW, your mail server rejects my direct mail, but I'm glad you got it through the list), you say:
> I'm pretty sure that's not the droid we're looking for.
> ExecHashJoinGetSavedTuple does palloc a new tuple, but it immediately
> sticks it into a TupleTableSlot t…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Jeff Janes
On Mon, Apr 15, 2019 at 12:34 PM Gunther wrote:
> Breakpoint 2, AllocSetAlloc (context=0x1168230, size=8272) at aset.c:715
> 715     {
> (gdb) p context->name
> $8 = 0x96ce5b "ExecutorState"
>
> I think that the above one might have been the one you wanted.
> I guess I should run this for a lit…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tom Lane
Alvaro Herrera writes:
> Seems that ExecHashJoinGetSavedTuple stores a minimalTuple and sets the
> shouldFree flag to "true", and then in ExecHashJoinNewBatch, callee
> ExecFetchSlotMinimalTuple sets shouldFree to false unconditionally when
> the slot uses minimal tuple ops. Maybe that's correct, …

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Alvaro Herrera
On 2019-Apr-15, Gunther wrote:
> #0  AllocSetAlloc (context=0x1168230, size=385) at aset.c:715
> #1  0x0084e6cd in palloc (size=385) at mcxt.c:938
> #2  0x0061019c in ExecHashJoinGetSavedTuple
>     (file=file@entry=0x8bbc528, hashvalue=hashvalue@entry=0x7fff2e4ca76c,
>     tupleSlot=…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tom Lane
Gunther writes:
> Now I give you a bt so we have something to look at:
> #0  AllocSetAlloc (context=0x1168230, size=385) at aset.c:715
> #1  0x0084e6cd in palloc (size=385) at mcxt.c:938
> #2  0x0061019c in ExecHashJoinGetSavedTuple
>     (file=file@entry=0x8bbc528, hashvalue=hashval…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Gunther
OK guys, you are very kind to continue taking an interest in this matter. I will try what I can to help squish the bug. Tomas Vondra just added a good idea that explains why I get the out-of-memory error while still having so much cache available:

# sysctl vm.overcommit_memory
vm.overcommit_memory = …
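The setting under discussion is the Linux overcommit policy. A hedged sysctl fragment showing the shape of the change; the ratio value below is an illustrative assumption, not something quoted from the thread:

```
# /etc/sysctl.conf fragment (sketch): mode 2 makes the kernel refuse
# allocations beyond swap + overcommit_ratio% of RAM, so the backend
# gets a catchable malloc failure instead of being OOM-killed.
vm.overcommit_memory = 2
vm.overcommit_ratio = 80    # illustrative value; size to your RAM/swap
```

Apply with `sysctl -p`, and verify with `cat /proc/sys/vm/overcommit_memory`.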

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Jeff Janes
On Mon, Apr 15, 2019 at 11:28 AM Tom Lane wrote:
> Jeff Janes writes:
> > To get it to happen faster, maybe you could run the server with a small
> > setting of "ulimit -v"? Or, you could try to capture it live in gdb.
> > Unfortunately I don't know how to set a breakpoint for allocations into…

Re: Block / Page Size Optimization

2019-04-15 Thread Tomas Vondra
On Mon, Apr 08, 2019 at 11:09:07AM -0400, Gunther wrote:
> Hi all, I am sure this should be a FAQ, but I can't see a definitive
> answer, only chatter on various lists and forums. The default page size
> of PostgreSQL is 8192 bytes. The default IO block size in Linux is
> 4096 bytes. I can set an XFS file sy…
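One practical constraint worth noting here (general Linux/XFS behavior, not a claim from this thread): Linux cannot mount a filesystem whose block size exceeds the CPU page size, so on x86_64 (4 kB pages) XFS is effectively capped at a 4096-byte block size even though PostgreSQL pages are 8192 bytes. A sketch of checking both sides, with an illustrative device path:

```
# Filesystem block size (device path is hypothetical)
$ xfs_info /dev/sdb1 | grep bsize
# PostgreSQL's compiled-in page size (a read-only preset parameter)
$ psql -Atc "SHOW block_size;"
8192
```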

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tom Lane
I wrote:
> If you can use gdb at all, it's not that hard to break on allocations
> into a specific context; I've done it many times. The strategy is
> basically
> 1. Let query run long enough for memory usage to start increasing,
> then attach to backend with gdb.

BTW, just to clarify that strate…
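Tom's strategy can be sketched as a gdb session; the `$_streq()` convenience function (gdb 7.5+) compares strings without calling `strcmp` in the inferior on every hit. The line number `aset.c:715` is taken from the backtraces in this thread; adjust it for your build:

```
$ gdb -p <backend_pid>
(gdb) break aset.c:715 if $_streq(context->name, "ExecutorState")
(gdb) commands
> silent
> bt 8
> continue
> end
(gdb) continue
```

Each hit then logs an 8-frame backtrace and resumes, so the culprit allocation sites accumulate in the gdb output while the query keeps running.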

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tomas Vondra
On Sun, Apr 14, 2019 at 05:19:50PM -0400, Jeff Janes wrote:
> On Sun, Apr 14, 2019 at 4:51 PM Gunther wrote:
> > For weeks now, I have been banging my head against an "out of memory"
> > situation. There is only one query I am running on an 8 GB system;
> > whatever I try, I get knocked out on this out of…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tom Lane
Jeff Janes writes:
> To get it to happen faster, maybe you could run the server with a small
> setting of "ulimit -v"? Or, you could try to capture it live in gdb.
> Unfortunately I don't know how to set a breakpoint for allocations into a
> specific context, and setting a breakpoint for any memo…
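Jeff's `ulimit -v` trick works because the limit caps the process's virtual address space, so the leak hits a catchable "out of memory" allocation failure long before the kernel OOM killer gets involved. A minimal stand-alone demonstration of the mechanism, using Python purely as a stand-in for a memory-hungry backend:

```shell
# Run a large allocation under a ~100 MB virtual-memory cap. The subshell
# keeps the limit from leaking into the parent shell; Python here is just
# a convenient stand-in allocator, not part of the thread's setup.
( ulimit -v 100000; python3 -c "x = bytearray(500 * 1024 * 1024)" ) 2>/dev/null \
  && echo "allocation succeeded" \
  || echo "allocation failed under the low ulimit"
```

A real server would be launched the same way: set `ulimit -v` in the shell that starts `postgres`, so every backend inherits the cap.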

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tomas Vondra
On Sun, Apr 14, 2019 at 11:59:45PM -0400, Gunther wrote:
> On 4/14/2019 23:24, Tom Lane wrote:
> > ExecutorState: 2234123384 total in 266261 blocks; 3782328 free
> > (17244 chunks); 2230341056 used
> > Oooh, that looks like a memory leak right enough. The ExecutorState
> > should not get that big for an…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Jeff Janes
On Sun, Apr 14, 2019 at 11:59 PM Gunther wrote:
> Is there any doubt that this might be a problem with Linux? Because if
> you want, I can whip out a FreeBSD machine, compile pgsql, and attach
> the same disk, and try it there. I am longing to have a reason to move
> back to FreeBSD anyway.

But…

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Jeff Janes
On Sun, Apr 14, 2019 at 11:04 PM Gunther wrote:
> Could you rerun the query with \set VERBOSITY verbose to show the
> file/line that's failing?
>
> Here goes:
>
> integrator=# \set VERBOSITY verbose
> integrator=# SET ENABLE_NESTLOOP TO OFF;
> SET
> integrator=# INSERT INTO reports.BusinessOpera…