On Fri, Aug 23, 2019 at 09:17:38AM -0400, Gunther wrote:
Hi all, I am reconnecting to a discussion from April this year. My
data has grown and now I am running into new out-of-memory situations.
Meanwhile the world turned from 11.2 to 11.5, which I just installed,
only to find the same out-of-memory problem.
On Sat, Aug 24, 2019 at 11:40:09AM -0400, Gunther wrote:
Thanks Tom, yes, I'd say it's using a lot of memory, but I wouldn't call
it a "leak", as it doesn't grow during the 30 minutes or so that this
query runs. It explodes to 4 GB and then stays flat until done.
Well, the memory context stats you've shared should show where that
memory is going.
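For anyone wanting to inspect these numbers themselves: at the time of
this thread you would capture MemoryContextStats() output from the
backend, but since PostgreSQL 14 a backend can also query its own memory
contexts from SQL. A minimal sketch, not from the thread:

    -- Sketch (PostgreSQL 14+): top memory contexts of the current backend.
    -- The thread above predates this view and relied on MemoryContextStats
    -- output in the server log instead.
    SELECT name, parent, total_bytes, used_bytes
    FROM pg_backend_memory_contexts
    ORDER BY total_bytes DESC
    LIMIT 10;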
On Thu, Sep 19, 2019 at 07:00:01PM +0200, Joao Junior wrote:
A table with 800 GB means 800 files of 1 GB each. When I use TRUNCATE or
DROP TABLE, XFS, which is a journaling (log-based) filesystem, will write
a lot of data to its log, and this is the problem. The problem is not
Postgres; it is the way that XFS works.
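For context, PostgreSQL stores a large relation as a series of 1 GB
segment files, which is why an 800 GB table translates into roughly 800
files the filesystem must unlink on DROP or TRUNCATE. A small sketch to
see this for a table of your own ('big_table' is a placeholder name):

    -- pg_relation_filepath() returns the first segment's path relative to
    -- the data directory; later segments use .1, .2, ... suffixes.
    -- 'big_table' is a hypothetical table name.
    SELECT pg_relation_filepath('big_table');
    -- e.g. base/16384/16429  (segments 16429, 16429.1, 16429.2, ...)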
On Tue, Oct 01, 2019 at 11:42:33PM +1000, Behrang Saeedzadeh wrote:
Thanks. That eliminated the bottleneck!
Any ideas why adding ORDER BY to the subquery also changes the plan in a
way that eliminates the bottleneck?
IIRC the ORDER BY clause makes it impossible to "collapse" the subquery
into the upper query, so the planner plans it separately.
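To make the pull-up behavior concrete, here is a hedged sketch with
hypothetical tables, not the query from the thread; comparing the two
EXPLAIN outputs shows the first subquery being flattened into the outer
query, while the ORDER BY forces the second to be planned separately:

    -- Hypothetical tables for illustration only.
    CREATE TABLE t (id int, x int);
    CREATE TABLE u (id int, y int);

    -- Collapsed: the planner pulls the subquery up into the outer query.
    EXPLAIN SELECT *
    FROM (SELECT * FROM t WHERE x > 0) s
    JOIN u ON u.id = s.id;

    -- Not collapsed: the ORDER BY blocks the pull-up, so the subquery is
    -- planned on its own (often adding a Sort node and changing the join).
    EXPLAIN SELECT *
    FROM (SELECT * FROM t WHERE x > 0 ORDER BY x) s
    JOIN u ON u.id = s.id;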
On Fri, Oct 04, 2019 at 07:28:54PM +0530, nikhil raj wrote:
Hi Justin,
It's been executing for 35+ minutes; due to the statement timeout it's
getting canceled.
Well, without a query plan it's really hard to give you any advice. We
need to see at least EXPLAIN output (without ANALYZE) to get an idea of
what's going on.
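One practical note on the timeout: plain EXPLAIN only plans the query and
does not execute it, so in practice it returns quickly even when running
the query itself would hit statement_timeout. A sketch with placeholder
table and column names, not the thread's actual query:

    -- EXPLAIN without ANALYZE never runs the query, so a 35-minute
    -- execution timeout is not an obstacle to producing a plan.
    -- Names below are hypothetical placeholders.
    EXPLAIN
    SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.created_at > now() - interval '30 days';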