On Feb 5, 2008, at 4:28 PM, Jeff Davis wrote:

On Mon, 2008-02-04 at 16:11 -0600, Erik Jones wrote:
> > Are you sure the postmaster is being launched
> > under ulimit unlimited?
>
> ulimit -a gives:
>
> core file size          (blocks, -c) unlimited
> data seg size           (kbytes, -d) unlimited
> file size               (blocks, -f) unlimited
I wrote:
> ... I'm wondering a bit why CacheMemoryContext has so much free space
> in it, but even if it had none you'd still be at risk.

I tried to reproduce this by creating a whole lot of trivial tables and
then pg_dump'ing them:

create table t0 (f1 int primary key); insert into t0 values(0);
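
The archive truncates the script at this point; presumably the same
pattern was repeated for a great many tables. A minimal sketch of one
way to build such a throwaway schema, assuming a server new enough for
DO blocks and format() (9.0/9.1 or later, so newer than the servers
discussed in this thread); the table names and the count of 10,000 are
purely illustrative:

    -- Create many trivial tables so that pg_dump's catalog queries
    -- have a very large relation set to work through.
    DO $$
    BEGIN
        FOR i IN 0..9999 LOOP
            EXECUTE format('CREATE TABLE t%s (f1 int PRIMARY KEY)', i);
            EXECUTE format('INSERT INTO t%s VALUES (0)', i);
        END LOOP;
    END;
    $$;

Running pg_dump against the resulting database exercises the same
per-relation catalog queries the thread is about.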
Erik Jones <[EMAIL PROTECTED]> writes:
> On Feb 4, 2008, at 3:26 PM, Tom Lane wrote:
>> Are you sure the postmaster is being launched
>> under ulimit unlimited?
> ulimit -a gives:
One possible gotcha is that ulimit in an interactive shell isn't
necessarily the same environment that an init script runs under.
On Feb 4, 2008, at 3:26 PM, Tom Lane wrote:

Erik Jones <[EMAIL PROTECTED]> writes:
> Sure.  I've attached an archive with the full memory context and
> error for each.  Note that I'm already 99% sure that this is due to
> our exorbitantly large relation set, which is why I think pg_dump's
> catalog queries are running out of work_mem ...
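
The exact query text pg_dump issues isn't shown in these excerpts, but
the kind of catalog join being described can be sketched roughly as
follows (a simplified illustration, not pg_dump's actual query):

    -- Rough sketch of a per-database catalog join over pg_index,
    -- pg_class and pg_depend; pg_dump's real queries are more involved.
    SELECT c.relname,
           i.indexrelid::regclass AS index_name,
           d.deptype
    FROM pg_index i
    JOIN pg_class c ON c.oid = i.indrelid
    LEFT JOIN pg_depend d
           ON d.classid = 'pg_class'::regclass
          AND d.objid = i.indexrelid
    ORDER BY c.relname;

With an exorbitantly large relation set, even a query of this shape
returns a correspondingly large result, which is the situation Erik
describes.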
On Feb 4, 2008, at 1:27 PM, Tom Lane wrote:

We'd need to see more details to really give decent advice.  Exactly
what queries, and exactly what was the error message (in particular,
I'm wondering how large the failed request was)?  Which PG version?
Can you get the memory context dump out of the postmaster log?
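
The version and memory-related settings being asked about here can be
pulled from any psql session; the statements below are a generic
checklist rather than anything quoted in the thread. The memory
context dump itself is written to the server log at the time the "out
of memory" error is raised.

    -- Basic facts worth including in a report like this one.
    SELECT version();            -- exact PostgreSQL version
    SHOW work_mem;               -- per-sort/per-hash memory budget
    SHOW maintenance_work_mem;
    SHOW shared_buffers;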
Erik Jones <[EMAIL PROTECTED]> writes:
> Hello, this past weekend I received a couple of Out of Memory errors
> while running pg_dump for two different selects against the
> catalogs, one with pg_get_viewdef() and the other with one of the
> pg_index join pg_class left join pg_depend queries.  Is it work_mem
> I should be increasing ...?
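
As for the question itself: work_mem can be checked and raised for a
single session before re-running the problem queries, though whether
it is really the limiting factor is exactly what the rest of the
thread goes on to examine. The 256MB value below is purely an example:

    -- work_mem is the per-sort/per-hash memory budget on the server side.
    SHOW work_mem;
    -- Raise it for this session only; the value is an arbitrary example.
    SET work_mem = '256MB';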