I've done the upgrade to 9.5. Memory bloat has dropped to a more
manageable level. Most workers have an overhead of <20MB, with one outlier
consuming 60MB.
— Theron
On Fri, Aug 26, 2016 at 5:41 AM, Tom Lane wrote:
> Theron Luhn writes:
> > Okay, I got a semi-reproducible test case:
> > http
Theron Luhn writes:
> Okay, I got a semi-reproducible test case:
> https://gist.github.com/luhn/2b35a9b31255e3a6a2e6a06d1213dfc9
> The one caveat is that the memory rise only happens when using a
> HashAggregate query plan (included in the gist), which I can't find a way
> to get Postgres to reli
Okay, I got a semi-reproducible test case:
https://gist.github.com/luhn/2b35a9b31255e3a6a2e6a06d1213dfc9
The one caveat is that the memory rise only happens when using a
HashAggregate query plan (included in the gist), which I can't find a way
to get Postgres to reliably use.
If you need it, I co
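(An aside, not from the thread: one hedged way to coax the planner into the
HashAggregate plan for testing is to penalize the sort-based alternative and
give the hash table enough work_mem. The table and column names below are
hypothetical stand-ins for whatever is in the gist.)

BEGIN;
SET LOCAL enable_sort = off;    -- makes Sort + GroupAggregate look expensive
SET LOCAL work_mem = '256MB';   -- lets the planner assume the hash table fits
EXPLAIN (ANALYZE, BUFFERS)
  SELECT some_key, count(*)     -- hypothetical stand-in query
  FROM some_table
  GROUP BY some_key;
ROLLBACK;

SET LOCAL keeps the overrides scoped to the transaction, so the session goes
back to its normal planner settings after ROLLBACK.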
Theron Luhn writes:
> Okay, here's the output:
> https://gist.github.com/luhn/a39db625ba5eed90946dd4a196d12220
Hm, well the only thing there that looks even slightly out of the
ordinary is the amount of free space in TopMemoryContext itself:
TopMemoryContext: 3525712 total in 432 blocks; 3444272
On 8/25/2016 9:58 AM, Theron Luhn wrote:
I do not remember the exact formula, but it should be something like
“work_mem*max_connections + shared_buffers”, and that total should be around
80% of your machine's RAM (minus the RAM used by other processes and the
kernel). It will save you from OOM.
a single query
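(An aside, not from the thread: a rough, illustrative way to evaluate that
formula against a running server straight from pg_settings. A single query can
allocate several work_mem-sized areas, so treat the result as a ballpark only.)

-- work_mem is reported in kB and shared_buffers in 8kB pages on a default build
SELECT pg_size_pretty(
         (SELECT setting::bigint * 1024 FROM pg_settings WHERE name = 'work_mem')
       * (SELECT setting::bigint        FROM pg_settings WHERE name = 'max_connections')
       + (SELECT setting::bigint * 8192 FROM pg_settings WHERE name = 'shared_buffers')
       ) AS ballpark_usage;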
Hi Ilya,
> Are you talking about the buffers/cache increase? AFAIK this memory is used
> by the kernel as a buffer in front of block devices (an HDD, for example).
If I'm reading the output correctly, buffers/cached do not increase. I'm
looking at the 248MB -> 312MB under the "used" column in the "-/+
buffers/ca
Okay, here's the output:
https://gist.github.com/luhn/a39db625ba5eed90946dd4a196d12220
— Theron
On Thu, Aug 25, 2016 at 12:34 PM, Tom Lane wrote:
> Theron Luhn writes:
> >> It would be worth using plain old top to watch this process. We have
> >> enough experience with that to be pretty sure
Theron Luhn writes:
>> It would be worth using plain old top to watch this process. We have
>> enough experience with that to be pretty sure how to interpret its
>> numbers: "RES minus SHR" is the value to be worried about.
> Sure thing.
> https://gist.github.com/luhn/e09522d524354d96d297b153d
> It would be worth using plain old top to watch this process. We have
> enough experience with that to be pretty sure how to interpret its
> numbers: "RES minus SHR" is the value to be worried about.
Sure thing. https://gist.github.com/luhn/e09522d524354d96d297b153d1479c13#file-top-txt
RES -
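(Small aside, not from the thread: to know which backend to watch in top, the
operating-system PID of the current session can be pulled from SQL; "RES minus
SHR" for that PID is the per-backend number being discussed.)

-- returns the OS process ID of the backend serving this connection
SELECT pg_backend_pid();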
On Thu, Aug 25, 2016 at 1:25 PM, Tom Lane wrote:
> Hmm. I find it mighty suspicious that the USS, PSS, and RSS numbers are
> all increasing to pretty much the same tune, ie from very little to circa
> 100MB. I think there is a decent chance that smem is not doing what it
> says on the tin, and
Theron Luhn writes:
>> If it's not an outright leak, it's probably consumption of cache space.
>> We cache stuff that we've read from system catalogs, so sessions that
>> touch lots of tables (like thousands) can grow due to that. Another
>> possible source of large cache consumption is calling l
> 9.3.which? We do fix memory leaks from time to time ...
9.3.14
> If it's not an outright leak, it's probably consumption of cache space.
> We cache stuff that we've read from system catalogs, so sessions that
> touch lots of tables (like thousands) can grow due to that. Another
> possible sou
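(An aside, not from the thread: a rough way to see how many relations exist for
a long-lived session's catalog caches to accumulate entries about, relevant if
the workload really does touch thousands of tables.)

-- relations by kind: r = table, i = index, v = view, t = TOAST, S = sequence
SELECT relkind, count(*)
FROM pg_class
GROUP BY relkind
ORDER BY count(*) DESC;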
Theron Luhn writes:
> I have an application that uses Postgres 9.3 as the primary datastore.
9.3.which? We do fix memory leaks from time to time ...
> Some of these queries use quite a bit of memory. I've observed a
> "high-water mark" behavior in memory usage: running a query increases the
>
$ free -h # Before the query
             total       used       free     shared    buffers     cached
Mem:          7.8G       5.2G       2.6G       212M        90M       4.9G
-/+ buffers/cache:        248M       7.6G
Swap:           0B         0B         0B
$ free -h # After the query
I have an application that uses Postgres 9.3 as the primary datastore.
Like any real-life application, it's not all roses: there are many ugly,
convoluted, and inefficient queries.
Some of these queries use quite a bit of memory. I've observed a
"high-water mark" behavior in memory usage: running