Alvaro Herrera <[EMAIL PROTECTED]> writes:
> How is the memory consumed? How are you measuring it? I assume you
> mean the postgres process that is running the query uses the memory.
> If so, which tool(s) are you using and what's the output that shows it
> being used?

It's periodically measured and recorded by a script from which the relevant …

On Oct 12, 2007, at 4:48 PM, henk de wit wrote:
>> I have work_mem set to 256MB.
> Wow. That's inordinately high. I'd recommend dropping that to 32-43MB.

Ok, it seems I was totally wrong with the work_mem setting. I'll adjust it
to a saner level. Thanks a lot for the advice everyone!

> Explain is your friend in that respect.

It sh…

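A minimal, hypothetical illustration of the "Explain is your friend" advice
(the table and columns below are made up, not the actual schema from this
thread):

    EXPLAIN ANALYZE
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id
    ORDER BY total DESC;

    -- Each "Sort", "Hash Join", or "HashAggregate" node in the resulting
    -- plan may claim up to work_mem bytes, all at the same time.
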
On Oct 12, 2007, at 4:09 PM, henk de wit wrote:
> It looks to me like you have work_mem set optimistically large. This
> query seems to be doing *many* large sorts and hashes:

I have work_mem set to 256MB. Reading the PG documentation I now realize
that "several sort or hash operations might be running in parallel". So this
is most likely the cause of the problem.

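Since each of those parallel sorts/hashes may claim a full work_mem, a few of
them at 256MB can exhaust a 32-bit process's address space on their own. A
minimal sketch of the safer pattern suggested in this thread (the 32MB figure
follows the advice above; the 128MB override is illustrative, not from the
original posts):

    -- postgresql.conf: keep the server-wide default modest
    -- work_mem = 32MB

    -- ...and raise it only for the one transaction that really needs it:
    BEGIN;
    SET LOCAL work_mem = '128MB';  -- reverts automatically at COMMIT/ROLLBACK
    -- run the memory-hungry query here
    COMMIT;
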
henk de wit <[EMAIL PROTECTED]> writes:
> I indeed found them in the logs. Here they are:

It looks to me like you have work_mem set optimistically large. This
query seems to be doing *many* large sorts and hashes:

> HashBatchContext: 262144236 total in 42 blocks; 3977832 free (40 chunks);
> 258166404 used

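For scale: that one HashBatchContext accounts for roughly a full work_mem by
itself, since 262144236 bytes is almost exactly 250 MB. A quick way to check
such numbers (pg_size_pretty is available in 8.1 and later):

    SELECT pg_size_pretty(262144236::bigint);              -- 250 MB total
    SELECT pg_size_pretty((262144236 - 3977832)::bigint);  -- 246 MB used
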
henk de wit <[EMAIL PROTECTED]> writes:
> I'm not exactly sure what to look for in the log. I'll do my best though
> and see what I can come up with.

It'll be a bunch of lines like

  TopMemoryContext: 49832 total in 6 blocks; 8528 free (6 chunks); 41304 used

immediately in front of the out-of-memory error message.

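The three numbers in each such line are self-consistent, which helps when
reading the dump: total minus free equals used. For the TopMemoryContext
line above:

    49832 - 8528 = 41304   (bytes used, out of 49832 allocated)
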
> This error should have produced a map of per-context memory use in the
> postmaster log. Please show us that.

I'm not exactly sure what to look for in the log. I'll do my best though and
see what I can come up with.

Erik Jones <[EMAIL PROTECTED]> writes:
> Tom, are there any docs anywhere that explain how to interpret those
> per-context memory dumps?

No, not really. What you have to do is grovel around in the code and
see where contexts with particular names might get created.

> For example, when I see …

henk de wit <[EMAIL PROTECTED]> writes:
> ERROR: out of memory
> DETAIL: Failed on request of size 4194304.

This error should have produced a map of per-context memory use in the
postmaster log. Please show us that.

			regards, tom lane
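
(For reference, the failed request is exactly 4 MB: 4194304 = 4 * 1024 * 1024.
The server could not find even that much additional memory.)
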
On 10/11/07, henk de wit <[EMAIL PROTECTED]> wrote:

Hi,

I'm running into a problem with PostgreSQL 8.2.4 (running on 32-bit Debian
Etch/2x dual core C2D/8GB mem). The thing is that I have a huge transaction
that does two things: 1) delete about 300,000 rows from a table with about
15 million rows and 2) do some (heavy) calculations and re-insert a …
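
A minimal sketch of the shape of such a transaction (table names, predicate,
and calculation are hypothetical; the thread does not show the real query):

    BEGIN;

    -- 1) delete ~300,000 of the ~15 million rows
    DELETE FROM measurements
    WHERE taken_at < now() - interval '30 days';

    -- 2) heavy calculation over the remainder, then re-insert the results
    INSERT INTO daily_totals (day, total)
    SELECT date_trunc('day', taken_at), sum(value)
    FROM measurements
    GROUP BY date_trunc('day', taken_at);

    COMMIT;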