Greetings,
To all those who helped out with this problem, I'm sorry it took me so
long to respond. For the record, the matter is solved, at least for us,
but we had to repeat a lot of things to make sure.
First, the "out of memory" problem repeated itself when restoring that
single table, after
On Thu, 14-07-2016 at 10:52 +0100, Miguel Ramos wrote:
>
> On Wed, 13-07-2016 at 18:42 -0400, Tom Lane wrote:
> >
> > I wrote:
> > >
> > > I'm still suspicious that this might be some sort of NOTICE-processing-related
> > > buffer bloat. Could you try loading the data with the
> >
On Fri, 15-07-2016 at 07:16 -0700, Adrian Klaver wrote:
> On 07/15/2016 12:37 AM, Miguel Ramos wrote:
> > What else?
>
> The pg_dump file you are restoring from is a custom format.
>
> Do you have room to do something like this?:
>
> 1) pg_restore -d some_db -U some_user -t inspection.positioned_scan
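A plausible shape of that suggested command, keeping Adrian's placeholder
database and user names and reusing the dump path from earlier in the thread
(older pg_restore releases select the schema separately with -n rather than a
schema-qualified -t):

    pg_restore -d some_db -U some_user -n inspection -t positioned_scan \
        /mnt/paysdeloire2013_convertida2.1.dump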
On 07/15/2016 12:37 AM, Miguel Ramos wrote:
On Thu, 14-07-2016 at 10:52 +0100, Miguel Ramos wrote:
On Wed, 13-07-2016 at 18:42 -0400, Tom Lane wrote:
I wrote:
I'm still suspicious that this might be some sort of NOTICE-processing-related
buffer bloat. Could you try loading the data
Miguel Ramos writes:
> I see (transcribed by hand from screenshot):
> ...
> pg_restore: processing data for table "inspection.positioned_scan"
> out of memory
> Process returned exit code 1.
Right, so that confirms that the OOM happens while sending data for that
table; but we're still no closer
Yes.
Both 9.1.8, I checked right now.
-- Miguel
On Wed, 13-07-2016 at 13:59 -0700, John R Pierce wrote:
> On 7/13/2016 1:51 PM, Miguel Ramos wrote:
> > Finally, here are the log messages at the moment of the error.
> > It is clearly not while building indices.
> >
> > The table in question is
On Wed, 13-07-2016 at 18:42 -0400, Tom Lane wrote:
> I wrote:
> > I'm still suspicious that this might be some sort of NOTICE-processing-related
> > buffer bloat. Could you try loading the data with the server's
> > log_min_messages level cranked down to NOTICE, so you can see from
>
On Wed, 13-07-2016 at 17:15 -0400, Tom Lane wrote:
> Miguel Ramos writes:
> > So, what does this mean?
> > Was it the client that aborted? I think I saw that "unexpected
> > message
> > type 0x58" on other types of interruptions.
>
> Yeah, 0x58 is ASCII 'X' which is a Terminate message. Between that and
> the unexpected-EOF report, it's quite clear that the client
That's 3 years and 3 months with absolutely zero maintenance.
Apart from the scripts I left back then.
During that time, it was used by an average of 10 people, some 9T of
sensor data was entered at the rate of 60G/week, and another 3T of
analysis data was produced.
The expression "cutting down on m
I wrote:
> I'm still suspicious that this might be some sort of NOTICE-processing-related
> buffer bloat. Could you try loading the data with the server's
> log_min_messages level cranked down to NOTICE, so you can see from the
> postmaster log whether any NOTICEs are being issued to the pg_restore
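To act on Tom's request, both settings can be changed in postgresql.conf and
picked up with a reload; a minimal sketch (the data directory path is a
placeholder):

    # in postgresql.conf:
    #   log_min_messages = notice
    #   log_statement = all      # also requested later in the thread
    pg_ctl reload -D /path/to/data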
On 7/13/2016 2:11 PM, Miguel Ramos wrote:
Yes.
Both 9.1.8, I checked right now.
9.1 is up to 9.1.22; that's a lot of bug fixes you're missing. 9.1.8 was
released 2013-02-07, 9.1.22 on 2016-05-12.
--
john r pierce, recycling bits in santa cruz
Miguel Ramos writes:
> So, what does this mean?
> Was it the client that aborted? I think I saw that "unexpected message
> type 0x58" on other types of interruptions.
Yeah, 0x58 is ASCII 'X' which is a Terminate message. Between that and
the unexpected-EOF report, it's quite clear that the client
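As a quick sanity check of the byte value (plain shell, nothing
PostgreSQL-specific):

    printf '\130\n'    # octal 130 = hex 0x58 = ASCII 'X', the Terminate message type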
On 07/13/2016 01:51 PM, Miguel Ramos wrote:
Finally, here are the log messages at the moment of the error.
It is clearly not while building indices.
The table in question is a big one, 111GB.
Fields latitude, longitude and height are arrays of length around 500-700
on each row (double and real).
On Tue, 12-07-2016 at 13:08, Sameer Kumar wrote:
> On Tue, 12 Jul 2016, 7:25 p.m. Miguel Ramos,
> wrote:
> > I found two relevant threads on the mailing-lists.
> > The most recent one suggested that postgresql was being configured
> > to use
> > more memory than what's available.
> > The
On 7/13/2016 1:51 PM, Miguel Ramos wrote:
Finally, here are the log messages at the moment of the error.
It is clearly not while building indices.
The table in question is a big one, 111GB.
Fields latitude, longitude and height are arrays of length around 500-700
on each row (double and real).
Finally, here are the log messages at the moment of the error.
It is clearly not while building indices.
The table in question is a big one, 111GB.
Fields latitude, longitude and height are arrays of length around 500-700
on each row (double and real).
So, what does this mean?
Was it the client that aborted? I think I saw that "unexpected message
type 0x58" on other types of interruptions.
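A hedged way to spot-check those array sizes from the shell, assuming the
column and table names quoted in this thread and the "recovery" database from
the restore command:

    psql -d recovery -c "SELECT pg_column_size(latitude)  AS lat_bytes,
                                pg_column_size(longitude) AS lon_bytes,
                                pg_column_size(height)    AS height_bytes
                           FROM inspection.positioned_scan LIMIT 5;"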
On Tue, Jul 12, 2016 at 12:25:08PM +0100, Miguel Ramos wrote:
> > # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
> > pg_restore: [custom archiver] out of memory
> > 12:09:56.58 9446.593u+1218.508s 24.3% 167+2589k 6+0io 0pf+0sw 6968822cs
...
> I suspect that the restore fails
Miguel Ramos writes:
> This is because I have the impression that it is during index creation,
> where I think the client's role would be minimal.
Hard to believe really, given the spelling of the message. But anyway,
be sure you do the run with log_statement = all so that it's clear what
is being worked
On Tue, 12-07-2016 at 11:58 -0400, Tom Lane wrote:
>
> Anyway, it would be useful to try running the restore with a more modern
> version of pg_restore, to see if that helps.
>
> regards, tom lane
>
>
I have the scheduled restart tonight.
So, I will do the other test
Miguel Ramos writes:
> At 15:40 on 12-07-2016, Tom Lane wrote:
>> Unless you're running pg_restore under a really small ulimit, this would
>> seem to suggest some kind of memory leak in pg_restore itself. I wonder
>> how many objects in your dump (how long is "pg_restore -l" output)?
> pg_res
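Tom's question about the number of archive entries can be answered straight
from the dump file (path from earlier in the thread):

    pg_restore -l /mnt/paysdeloire2013_convertida2.1.dump | wc -l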
At 16:23 on 12-07-2016, Miguel Ramos wrote:
It looks to me like this error is pg_restore itself running out of
memory,
not reporting a server-side OOM condition. You could verify that by
looking in the server log to see whether any out-of-memory error appeared
there. But assuming that I'm r
At 15:40 on 12-07-2016, Tom Lane wrote:
Miguel Ramos writes:
We have backed up a database and now when trying to restore it to the
same server we get this:
# pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
pg_restore: [custom archiver] out of memory
It looks to me like t
Miguel Ramos writes:
> We have backed up a database and now when trying to restore it to the
> same server we get this:
>>> # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
>>> pg_restore: [custom archiver] out of memory
It looks to me like this error is pg_restore itself running out of memory
On Tue, 12 Jul 2016, 7:25 p.m. Miguel Ramos, <
org.postgre...@miguel.ramos.name> wrote:
>
> Hi,
>
> We have backed up a database and now when trying to restore it to the
> same server we get this:
>
> > # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
> > pg_restore: [custom arch
2016-07-12 8:54 GMT-03:00 Miguel Ramos:
>
> At 12:32 on 12-07-2016, Felipe Santos wrote:
>
>> I would try lowering max_connections to 50 and then set work_mem to 128MB.
>>
>> After that restart your server and retry the restore.
>>
>
> Ok, I will try restarting tonight.
> work_mem is the parameter I was most afraid of.
At 12:32 on 12-07-2016, Felipe Santos wrote:
I would try lowering max_connections to 50 and then set work_mem to 128MB.
After that restart your server and retry the restore.
Ok, I will try restarting tonight.
work_mem is the parameter I was most afraid of.
I'll post some news in 24h...
-
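For reference, Felipe's suggestion as postgresql.conf lines (the restart is
needed because max_connections cannot be changed with a reload):

    max_connections = 50
    work_mem = 128MB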
2016-07-12 8:25 GMT-03:00 Miguel Ramos:
>
> Hi,
>
> We have backed up a database and now when trying to restore it to the same
> server we get this:
>
> > # pg_restore -d recovery /mnt/paysdeloire2013_convertida2.1.dump
> > pg_restore: [custom archiver] out of memory
> > 12:09:56.58 9446.593u+1218.508s 24.3% 167+2589k 6+0io 0pf+0sw 6968822cs
Vivek Khera writes:
you set your maxdsiz to the same as on i386?
On the AMD64 I have /boot/loader.conf as
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
In i386 I believe I had set
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
kern.maxdsiz="1600MB"
kern.dfldsiz="1600MB"
On Jun 25, 2007, at 10:32 PM, Francisco Reyes wrote:
Hm... now I am really confused.
The same settings on AMD64 work. So how are "more resources
available" when I have the same amount of memory and the same
settings?
you set your maxdsiz to the same as on i386? on even my smallest
am
Vivek Khera writes:
i wouldn't call it a bug to need more resources than you've got
available :-)
Hm... now I am really confused.
The same settings on AMD64 work. So how are "more resources available" when
I have the same amount of memory and the same settings?
I even tried a machine with
On Jun 25, 2007, at 9:33 PM, Francisco Reyes wrote:
Therefore, the problem is only with the i386 version.
Should I report this as a bug or is this "normal" and expected?
i wouldn't call it a bug to need more resources than you've got
available :-) obviously the limits on the i386 version
Re-did the machine that was running out of memory and installed FreeBSD 6.2
AMD64.
The rows that used to fail now load.
Therefore, the problem is only with the i386 version.
Should I report this as a bug or is this "normal" and expected?
Alvaro Herrera writes:
The problem is probably the ulimit.
The problem occurrs on i386, but not on 64bit architecture.
Tested 5 machines.
3 i386 FreeBSD 6.2
2 AMD64 FreeBSD 6.2
The 64 bit machines, with postgresql compiled from ports, worked.
One of the machines had default OS limit. The se
Would it help at all to run a ktrace?
Or are the logs I have supplied enough?
Alvaro Herrera writes:
Can you send the program along? And the table definition (including
indexes, etc)?
I put the table definition and a Python program that reproduces the problem
at:
http://public.natserv.net/table-schema.sql
http://public.natserv.net/large_record.py
The program uses t
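The linked program itself is not preserved here; a minimal sketch of that kind
of reproducer, driven from the shell with an illustrative table name and an
~84MB value like the failing row in this thread:

    psql -d testdb -c "CREATE TABLE big_row (payload text);"
    psql -d testdb -c "INSERT INTO big_row SELECT repeat('x', 84 * 1024 * 1024);"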
Alvaro Herrera writes:
The OS size is 1.6GB, but today I am going to try increasing kern.maxssiz.
Vivek recommended increasing it
kern.maxssiz is the maximum stack size.
Increased to 512MB. Didn't help.
The problem is probably the ulimit.
I did a shell script which continuously did ps aux
Francisco Reyes wrote:
> Tom Lane writes:
>
> >What's more, because the line and field buffers are StringInfos that are
> >intended for reuse across multiple lines/fields, they're not simply made
> >equal to the exact size of the big field. They're rounded up to the
> >next power-of-2, ie, if you
Tom Lane writes:
What's more, because the line and field buffers are StringInfos that are
intended for reuse across multiple lines/fields, they're not simply made
equal to the exact size of the big field. They're rounded up to the
next power-of-2, ie, if you've read an 84MB field during the current
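That rounding is exactly what the syslog lines later in the thread show: the
next power of two above an 84MB field is 128MB, i.e. the 134217728-byte
request that failed:

    echo $((128 * 1024 * 1024))    # 134217728, the "Failed on request of size" value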
Francisco Reyes <[EMAIL PROTECTED]> writes:
> Also the error is about running out of memory when trying to allocate 84MB.
> The default FreeBSD limit is 512MB so 84MB is well below that.
Well, no, it's not.
I traced through a test case involving loading a multi-megabyte text
value, and what I fin
Francisco Reyes <[EMAIL PROTECTED]> writes:
> The issue I am trying to figure is which limit.. the OS limit is set to
> 1.6GB. I am now trying to increase my shared_buffers. So far have them at
> 450MB and it is still failing.
For this problem, increasing shared_buffers is either useless or
down
Tom Lane writes:
Keep in mind though that the COPY process is going to involve several
working copies of that data (at least four that I can think of ---
line input buffer, field input buffer, constructed text object, and
constructed tuple).
Will this be for the shared_buffers memory?
I'm al
Francisco Reyes <[EMAIL PROTECTED]> writes:
> Also the error is about running out of memory when trying to allocate 84MB.
> The default FreeBSD limit is 512MB so 84MB is well below that.
Keep in mind though that the COPY process is going to involve several
working copies of that data (at least fou
On Jun 18, 2007, at 2:10 PM, Francisco Reyes wrote:
Also the error is about running out of memory when trying to
allocate 84MB.
The default FreeBSD limit is 512MB so 84MB is well below that.
Try being less stingy than 128Mb for your stack. The default stack
is 512Mb.
--
Vivek Khera writes:
But that is a false assertion that the limit is 512Mb. On a random
system of mine running FreeBSD/i386 it shows the default data limit
as 1Gb, and on a random FreeBSD/amd64 box I see it at about 32Gb. I
do no global tweaking of the size limits.
Understood.
I only sho
On Jun 15, 2007, at 8:24 AM, Francisco Reyes wrote:
Understood. But at least it shows that the program was already
above the default of 512MB limit of the operating system.
But that is a false assertion that the limit is 512Mb. On a random
system of mine running FreeBSD/i386 it shows the
Tom Lane writes:
If that actually is the length of the line, the only answer is to raise
the memory ulimit setting the postmaster runs under.
Did another test to try to see if the problem is that row or the size of the
row.
Another record of greater size also failed.
Any ideas what this 84
Tom Lane writes:
Looked at the record in question and the length of the long column in that
row is 5753 (84MB).
If that actually is the length of the line, the only answer is to raise
the memory ulimit setting the postmaster runs under.
The memory limit is 1.6GB.
/boot/loader.conf
kern.
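The loader.conf values themselves appear in a later message in this thread:

    kern.maxdsiz="1600MB"    # maximum data segment size
    kern.dfldsiz="1600MB"    # default data segment size
    kern.maxssiz="128M"      # maximum stack size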
Francisco Reyes <[EMAIL PROTECTED]> writes:
> Tried a pg_dump without -Fc to see if I could get that one table loaded.
> Still failed.
> psql:message-attachments-2007-06-15.sql:2840177: ERROR: out of memory
> DETAIL: Failed on request of size 5765.
> CONTEXT: COPY message_attachments, line
Tried a pg_dump without -Fc to see if I could get that one table loaded.
Still failed.
psql:message-attachments-2007-06-15.sql:2840177: ERROR: out of memory
DETAIL: Failed on request of size 5765.
CONTEXT: COPY message_attachments, line 60660: "27202907225017
research/crew holds.
Tom Lane writes:
thought it was all one long line and tried to read it all in at once.
What's the history of your datafile --- has it maybe gotten passed
through a Windows newline conversion?
The database has emails.
The table in question holds the attachments. Each row represents one
attachm
Francisco Reyes <[EMAIL PROTECTED]> writes:
> Tom Lane writes:
>> Well, that's the last few lines of what I wanted, but not the part that
>> was interesting :-(. Please show all the lines that are like the "n
>> total in m blocks" format. Or at least look for the one(s) that contain
>> large numb
Tom Lane writes:
Well, that's the last few lines of what I wanted, but not the part that
was interesting :-(. Please show all the lines that are like the "n
total in m blocks" format. Or at least look for the one(s) that contain
large numbers...
Since you know best what you are looking for I
Francisco Reyes <[EMAIL PROTECTED]> writes:
> Tom Lane writes:
>> Can we see the context-sizes dump that should've come out right before
>> that in the log?
> Hope this is what you are looking for.
> Included a few lines before the error in case that is of any help.
Well, that's the last few line
Tom Lane writes:
Can we see the context-sizes dump that should've come out right before
that in the log?
Hope this is what you are looking for.
Included a few lines before the error in case that is of any help.
These lines are from the postgresql log. Redirected stderr to a file.
pg_amop_opr
Alvaro Herrera writes:
This is pg_restore's stderr. What Tom wants to see is postmaster's. It
is probably redirected (hopefully to a file, but regretfully it is
common to see it go to /dev/null) on the init script that starts the service.
pg_ctl -l didn't work.
Trying now with changes to postgresql.conf.
Alvaro Herrera writes:
This is pg_restore's stderr. What Tom wants to see is postmaster's. It
is probably redirected (hopefully to a file, but regretfully it is
common to see it go to /dev/null) on the init script that starts the
service.
How would I run it manually?
When I do pg_ctl start,
Tom Lane writes:
Hmm. I'm wondering about a memory leak in the input converter. What
datatype exactly are the wide column(s)?
Text.
Also, do you have any ON INSERT triggers on this table?
No.
Francisco Reyes <[EMAIL PROTECTED]> writes:
> If it is of any help.. information about the data.
> It is hex64 encoded data. The original data getting encoded can be up to
> 100MB. Not sure how much bigger the hex encoding could be making it.
Hmm. I'm wondering about a memory leak in the input c
Francisco Reyes wrote:
> Tom Lane writes:
>
> >The memory dump only comes out on stderr (I think because of paranoia
> >about running out of memory while trying to report we're out of memory).
> >Can you get the postmaster's stderr output?
>
> From stderr
>
> pg_restore: restoring data for tabl
Gregory Stark writes:
I'm skeptical that you can trust ps's VSZ reporting at this level of detail.
On some platforms VSZ includes a proportionate share of its shared memory or
might not include memory allocated but not actually written to yet (due to
copy-on-write).
Understood. But at least it shows that the program was already above the
default of 512MB limit of the operating system.
"Francisco Reyes" <[EMAIL PROTECTED]> writes:
> Gregory Stark writes:
>
>> You're right that your limit is above 128M but the error just means it tried
>> to allocate 128M and failed, it may already have allocated 400M and been
>> trying to go over the 524M mark.
>
> My limit should be 1.6GB a
Gregory Stark writes:
You're right that your limit is above 128M but the error just means it tried
to allocate 128M and failed, it may already have allocated 400M and been
trying to go over the 524M mark.
My limit should be 1.6GB as per /boot/loader.conf
I ran a ps every 20 seconds to try an
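A sketch of that kind of sampling loop (the 20-second interval is from the
message above; the bracketed grep pattern keeps grep itself out of the output):

    while true; do
        ps aux | grep '[p]ostgres'
        sleep 20
    done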
Tom Lane writes:
The memory dump only comes out on stderr (I think because of paranoia
about running out of memory while trying to report we're out of memory).
Can you get the postmaster's stderr output?
From stderr
pg_restore: restoring data for table "message_attachments"
pg_restore: [arc
"Francisco Reyes" <[EMAIL PROTECTED]> writes:
> There is no "ulimit -a" in cshell which is what I use.
> I guessed this may be a bash setting .. so tried that..
> The output of ulimit -a is:
The csh equivalent is just "limit".
> core file size (blocks, -c) unlimited
> data seg size
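The csh/tcsh equivalents would be:

    limit              # show all resource limits
    limit datasize     # just the data segment limit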
Francisco Reyes <[EMAIL PROTECTED]> writes:
> Tom Lane writes:
>> Can we see the context-sizes dump that should've come out right before
>> that in the log?
> Is this the one you need?
No. [squints...] Hm, you're looking at a syslog log, aren't you.
The memory dump only comes out on stderr (I t
Gregory Stark writes:
What does the output of "ulimit -a" show?
In FreeBSD, as far as I know, what controls the size of a program is the
/boot/loader.conf and /etc/login.conf
The default /etc/login.conf has unlimited size.
/boot/loader.conf is set to max program size of 1.6GB
Can you ar
"Francisco Reyes" <[EMAIL PROTECTED]> writes:
> kern.maxdsiz="1600MB" #1.6GB
> kern.dfldsiz="1600MB" #1.6GB
> kern.maxssiz="128M" # 128MB
It ought to be maxdsiz which seems large enough.
> The error that I got was:
> pg_restore: [archiver (db)] Error from TOC entry 2146; 0 988154 TABLE DATA
> m
Tom Lane writes:
Francisco Reyes <[EMAIL PROTECTED]> writes:
Syslog was:
Jun 14 10:17:56 bk20 postgres[7294]: [130-1] ERROR: out of memory
Jun 14 10:17:56 bk20 postgres[7294]: [130-2] DETAIL: Failed on request of size
134217728.
Can we see the context-sizes dump that should've come out rig
Francisco Reyes <[EMAIL PROTECTED]> writes:
> Syslog was:
> Jun 14 10:17:56 bk20 postgres[7294]: [130-1] ERROR: out of memory
> Jun 14 10:17:56 bk20 postgres[7294]: [130-2] DETAIL: Failed on request of
> size 134217728.
Can we see the context-sizes dump that should've come out right before
that
Never mind this, the file was mangled when sent non-binary over ftp
(Windows ftp servers apparently still send ASCII unless "set type i" is
specified :)
Cheers,
Palle
--On Tuesday, April 17, 2007 12.49.46 +0200 Palle Girgensohn
<[EMAIL PROTECTED]> wrote:
Hi,
I get out of memory immediately
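For completeness, forcing binary transfers in a classic command-line ftp
client is just:

    ftp> type i    # "binary" is the usual alias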