Alexis Lê-Quôc writes:
> On Wed, Aug 10, 2011 at 2:54 PM, Tom Lane wrote:
>> Hmph. Is there other stuff being run on the same instance? Are there a
>> whole lot of active PG processes? Maybe Amazon isn't really giving you
>> a whole 8GB, or there are weird address space r
On Wed, Aug 10, 2011 at 2:54 PM, Tom Lane wrote:
> Alexis Le-Quoc writes:
>> On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane wrote:
>>> However, I find it a bit odd that you're getting this failure in what
>>> appears to be a 64-bit build. That means you're not running out of
>>> address space, so you must actually be out of RAM+swap
Alexis Lê-Quôc writes:
> On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane wrote:
>> However, I find it a bit odd that you're getting this failure in what
>> appears to be a 64-bit build. That means you're not running out of
>> address space, so you must actually be out of RAM+swap
On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane wrote:
> Alexis Le-Quoc writes:
>>> I've been hitting an "out of memory" error during autovacuum of
>> relatively large tables (compared to the amount of RAM available).
>
>> The error message is:
>> [10236]: [1-1] user=,db=,remote= ERROR: out of memory
>>
Alexis Lê-Quôc writes:
> I've been hitting an "out of memory" error during autovacuum of
> relatively large tables (compared to the amount of RAM available).
> The error message is:
> [10236]: [1-1] user=,db=,remote= ERROR: out of memory
> [10236]: [2-1] user=,db=,remote= DETAIL:
Greetings,
I've been hitting an "out of memory" error during autovacuum of
relatively large tables (compared to the amount of RAM available). I'm
trying to trace the cause of the issue; the answer is somewhere below
and I don't know how to interpret the data. I can solve the issue
right now by using
What about the shared-memory settings of your kernel, i.e.
kernel.shmall and kernel.shmmax? Check them with sysctl; maybe they
should be raised:
http://www.postgresql.org/docs/8.2/interactive/kernel-resources.html
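A quick way to inspect these settings and compute candidate values is sketched below. It assumes a Linux box with 4 kB pages, and the 512 MB target segment size is purely illustrative, not a recommendation:

```shell
# Inspect the current limits (skipped quietly if sysctl is unavailable).
sysctl -n kernel.shmmax 2>/dev/null || true   # largest single segment, bytes
sysctl -n kernel.shmall 2>/dev/null || true   # system-wide total, in pages

# Compute candidate values for a hypothetical 512 MB shared memory segment.
page_size=4096                              # assumption: 4 kB pages
want_shmmax=$((512 * 1024 * 1024))          # 512 MB expressed in bytes
want_shmall=$((want_shmmax / page_size))    # the same amount, in pages
echo "kernel.shmmax = $want_shmmax"
echo "kernel.shmall = $want_shmall"
```

The computed values could then be applied with `sysctl -w` or persisted in /etc/sysctl.conf, per the page linked above.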
Regards
Jason Lustig wrote:
I ran "ulimit -a" for the postgres user, and here's what I got:
I wrote:
> ... The weird thing about this
> is why the large maintenance_work_mem works for a regular session and
> not for autovacuum. There really shouldn't be much difference in the
> maximum workable setting for the two cases, AFAICS.
After re-reading the thread I realized that the OP is comp
Richard Huxton <[EMAIL PROTECTED]> writes:
> Hmm - odd that you're not getting any connection details.
Not really; the autovacuum process doesn't have any connection, so those
log_line_prefix fields will be left empty. The weird thing about this
is why the large maintenance_work_mem works for a regular session and
not for autovacuum.
Jason Lustig wrote:
On Oct 16, 2007, at 10:22 AM, Richard Huxton wrote:
Add some lines to /etc/security/limits.conf to increase them.
Sorry for being somewhat of a linux novice -- but what is the best way
to do this? It doesn't seem to provide matching options from ulimit to
the limits.conf file.
On Oct 16, 2007, at 10:22 AM, Richard Huxton wrote:
Add some lines to /etc/security/limits.conf to increase them.
Sorry for being somewhat of a linux novice -- but what is the best
way to do this? It doesn't seem to provide matching options from
ulimit to the limits.conf file.
Thanks,
Jason
On Tue, 2007-10-16 at 10:14 -0400, Jason Lustig wrote:
> I ran "ulimit -a" for the postgres user, and here's what I got:
...
> max memory size (kbytes, -m) 20
> open files (-n) 100
> max user processes (-u) 100
> virtual memory (kbytes, -v) 200
On 10/16/07, Jason Lustig <[EMAIL PROTECTED]> wrote:
> I ran "ulimit -a" for the postgres user, and here's what I got:
>
> core file size (blocks, -c) 20
> data seg size (kbytes, -d) 20
> max nice (-e) 0
> file size (blocks, -f) unlimited
Jason Lustig wrote:
I ran "ulimit -a" for the postgres user, and here's what I got:
max memory size (kbytes, -m) 20
virtual memory (kbytes, -v) 20
There you go - you probably are exceeding these.
Add some lines to /etc/security/limits.conf to increase them.
--
Richard
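For reference, the items accepted in /etc/security/limits.conf (read by pam_limits) map onto the ulimit flags shown above roughly as follows; the `postgres` lines are illustrative placeholders, not recommended values:

```
# ulimit flag           -> limits.conf item (sizes are in kB)
#   -d  data seg size   -> data
#   -m  max memory size -> rss
#   -v  virtual memory  -> as
#   -n  open files      -> nofile
#   -u  max processes   -> nproc
postgres  soft  as      unlimited
postgres  hard  as      unlimited
postgres  soft  data    unlimited
postgres  hard  data    unlimited
```

Note that these take effect on the next PAM login session for the user, so the postgres service generally needs a restart from a fresh login.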
I ran "ulimit -a" for the postgres user, and here's what I got:
core file size (blocks, -c) 20
data seg size (kbytes, -d) 20
max nice (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 32635
max locked memory
On 10/16/07, Jason Lustig <[EMAIL PROTECTED]> wrote:
> Looking at my free memory (from TOP) I find
>
> Mem: 2062364k total, 1846696k used, 215668k free, 223324k buffers
> Swap: 2104496k total, 160k used, 2104336k free, 928216k cached
>
> So I don't think that I'm running out of memory
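Reading that `free` output: on this kernel, buffers and page cache are reclaimable, so the memory effectively available to a new allocation is roughly free + buffers + cached. A back-of-envelope check on the figures quoted above:

```shell
# Figures (kB) taken from the "free" output quoted above.
free_kb=215668
buffers_kb=223324
cached_kb=928216

# Effectively available memory, in MB (integer arithmetic).
avail_mb=$(( (free_kb + buffers_kb + cached_kb) / 1024 ))
echo "${avail_mb}MB effectively available"   # roughly 1.3 GB
```

That supports the point: the box is not short of physical memory, which is why attention turns to per-process limits instead.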
Jason Lustig wrote:
I lowered the maintenance_work_mem to 50MB and am still getting the same
errors:
Oct 16 09:26:57 [16402]: [1-1] user=,db= ERROR: out of memory
Oct 16 09:26:57 [16402]: [1-2] user=,db= DETAIL: Failed on request of
size 52428798.
Oct 16 09:27:57 [16421]: [1-1] user=,db= ERROR: out of memory
I lowered the maintenance_work_mem to 50MB and am still getting the
same errors:
Oct 16 09:26:57 [16402]: [1-1] user=,db= ERROR: out of memory
Oct 16 09:26:57 [16402]: [1-2] user=,db= DETAIL: Failed on request
of size 52428798.
Oct 16 09:27:57 [16421]: [1-1] user=,db= ERROR: out of memory
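An observation on those numbers (not stated in the thread itself): both DETAIL sizes are exactly 2 bytes short of a round maintenance_work_mem budget, which is what you would expect if vacuum sizes its dead-tuple array as the largest multiple of a 6-byte ItemPointerData that fits in the budget, assuming an 8.2-era lazy vacuum:

```shell
# DETAIL sizes quoted in the thread:
req_50mb=52428798     # after lowering maintenance_work_mem to 50MB
req_500mb=524287998   # from the original log, consistent with a 500MB setting

# Each is the largest multiple of 6 (sizeof ItemPointerData) that fits in
# the maintenance_work_mem budget, i.e. the budget minus 2 bytes.
echo $(( 50  * 1024 * 1024 - req_50mb ))
echo $(( 500 * 1024 * 1024 - req_500mb ))
```

So the failing request tracks maintenance_work_mem directly, which is why lowering the setting shrinks the request but does not by itself make the error go away if a per-process limit is the real ceiling.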
Jason Lustig wrote:
There isn't any more error message than this... it simply repeats every
minute or so, which is really quite strange. And the user & db are really
blank in the log, I didn't trim it (if I did I would have replaced it
with dummy values).
Hmm - odd that you're not getting any connection details.
There isn't any more error message than this... it simply repeats
every minute or so, which is really quite strange. And the user & db
are really blank in the log, I didn't trim it (if I did I would have
replaced it with dummy values).
I'm using pg 8.2.4 on Linux 2.6.15.
Jason
--
Jason Lustig
Not really a performance question, but...
Jason Lustig wrote:
Whenever I turn on Autovacuum on my database, I get a ton of error
messages like this in my Postgres log:
Oct 16 06:43:47 [2897]: [1-1] user=,db= ERROR: out of memory
Oct 16 06:43:47 [2897]: [1-2] user=,db= DETAIL: Failed on request
of size 524287998.
Whenever I turn on Autovacuum on my database, I get a ton of error
messages like this in my Postgres log:
Oct 16 06:43:47 [2897]: [1-1] user=,db= ERROR: out of memory
Oct 16 06:43:47 [2897]: [1-2] user=,db= DETAIL: Failed on request
of size 524287998.
It always fails on the same request