Original Message
Subject: Re: Trouble Upgrading Postgres
From: Tom Lane <t...@sss.pgh.pa.us>
Adrian Klaver writes:
> On 11/6/18 8:27 AM, Daniel Verite wrote:
>> Adrian Klaver wrote:
>>> To me that looks like a bug, putting data into a record you cannot get out.
>> Strictly speaking, it could probably get out with COPY in binary format,
>> but pg_dump doesn't use that.
Another possibilit
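For illustration only, the binary-format COPY that Daniel mentions could be run for a single row roughly as sketched below; the key value and output path are hypothetical, not taken from the thread, and a server-side COPY TO a file requires superuser (psql's \copy is the client-side equivalent).
  -- hypothetical row key and output path
  COPY (SELECT * FROM docfile WHERE docfile_pkey = 12345)
    TO '/tmp/docfile_row.bin' WITH (FORMAT binary);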
On 11/6/18 8:27 AM, Daniel Verite wrote:
Adrian Klaver wrote:
So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic.
To me that looks like a bug, putting data into a record you cannot get out.
Strictly speaking, it could probably get out with COPY in binary format,
but pg_dump doesn't use that.
Adrian Klaver wrote:
> > So there's no way it can deal with the contents over 500MB, and the
> > ones just under that limit may also be problematic.
>
> To me that looks like a bug, putting data into a record you cannot get out.
Strictly speaking, it could probably get out with COPY in binary format,
but pg_dump doesn't use that.
On 11/6/18 3:47 AM, Daniel Verite wrote:
Charles Martin wrote:
but the second one returned this:
0 "623140"
1 "53"
2 "12"
3 "10"
4 "1"
5 "1"
7 "1"
[null] "162"
Not quite sure what that means, but if there is just a small number of
overly-large records, I might be able to delete them
Thanks, Daniel.
Using your idea, I found the records over 400MB, and deleted them in the
application. The largest two were in inactive matters, and the third is
still available elsewhere if needed. I'll try pg_dump again after work
hours and see if it works now. Hopefully it will, now that I've
A
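The exact query used to find the oversized rows isn't quoted in the thread; a sketch of one way to list them, using the column names mentioned above and the 400 MB cutoff from this message:
  SELECT docfile_pkey, octet_length(docfilecontents) AS bytes
  FROM docfile
  WHERE octet_length(docfilecontents) > 400 * 1024 * 1024  -- 400 MB
  ORDER BY bytes DESC;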
Charles Martin wrote:
> but the second one returned this:
>
> 0 "623140"
> 1 "53"
> 2 "12"
> 3 "10"
> 4 "1"
> 5 "1"
> 7 "1"
> [null] "162"
>
> Not quite sure what that means, but if there is just a small number of
> overly-large records, I might be able to delete them. If I can find the
On 11/5/18 8:50 AM, Charles Martin wrote:
Daniel said:
This appears to be the case. I ran:
SELECT max(length(docfilecontents::text)) FROM docfile;
and after a very long time, got:
ERROR: invalid memory alloc request size 1636085512 SQL state: XX000
Adrian said:
Ok, thanks for explaining
The first query timed out, but the second one returned this:
0 "623140"
1 "53"
2 "12"
3 "10"
4 "1"
5 "1"
7 "1"
[null] "162"
Not quite sure what that means, but if there is just a small number of
overly-large records, I might be able to delete them. If I can find them.
On Mon, Nov 5, 2018 at 12:5
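The query that produced this breakdown isn't quoted in the thread; a bucketing query of roughly the following shape (the 100 MB bucket width is an assumption) would give output in that format, with the left column the size bucket, the right column the row count, and the [null] row counting rows whose docfilecontents is NULL:
  SELECT octet_length(docfilecontents) / (100 * 1024 * 1024) AS bucket_100mb,
         count(*)
  FROM docfile
  GROUP BY 1
  ORDER BY 1;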
Charles Martin wrote:
> SELECT max(length(docfilecontents::text)) FROM docfile;
> and after a very long time, got:
> ERROR: invalid memory alloc request size 1636085512 SQL state: XX000
It would mean that at least one row has a "docfilecontents"
close to 0.5GB in size. Or that the size fi
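The failing query casts the bytea column to text, which (with the default hex output) roughly doubles its size and can push a large value past the 1 GB allocation limit; measuring the raw size avoids the cast. A sketch using the same column:
  SELECT max(octet_length(docfilecontents)) FROM docfile;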
On 11/05/2018 11:06 AM, Charles Martin wrote:
Ron said:
>We were expecting the output of the "SHOW SHARED_BUFFERS;" command.
Ok, the result from that command is:
1GB
>The system has only 4GB of RAM. I read that a reasonable swap size is 1/4 of
RAM, so I've created a swap file of 1GB.
>Times
Ron said:
>We were expecting the output of the "SHOW SHARED_BUFFERS;" command.
Ok, the result from that command is:
1GB
>The system has only 4GB of RAM. I read that a reasonable swap size is 1/4
of RAM, so I've created a swap file of 1GB.
>Times have changed... (I'd have made a 4GB swap file.)
On 11/05/2018 10:50 AM, Charles Martin wrote:
[snip]
The results I pasted were from:
SELECT * FROM pg_settings
Maybe I didn't get it the right way.
We were expecting the output of the "SHOW SHARED_BUFFERS;" command.
The system has only 4GB of RAM. I read that a reasonable swap size is 1/4
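For reference, rather than scanning all of pg_settings, the single value can also be pulled directly (a sketch, not a command quoted in the thread):
  SELECT name, setting, unit FROM pg_settings WHERE name = 'shared_buffers';
  -- or simply:
  SHOW shared_buffers;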
Daniel said:
>It's plausible that, with only 4GB of RAM, the table that fails
to dump has some very large rows that can't be allocated, especially
since both the backend and pg_dump need to have it simultaneously
in memory.
> >pg_dump: The command was: COPY public.docfile (docfile_pkey,
> >docfile
Charles Martin writes:
> Ok, thanks for explaining this. Here is the current value:
> "shared_buffers" "131072" "8kB"
Well, that's 1GB, which might be ambitious inside a VM with a hard
restriction to 4GB total RAM. Postgres can get by with a *lot* less.
Try knocking it down to a tenth of that an
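The reported value is 131072 buffers of 8 kB each, i.e. 1 GB, so "a tenth of that" is on the order of 100 MB. One way to try a smaller setting (a sketch; the exact value is illustrative, and editing postgresql.conf works just as well):
  ALTER SYSTEM SET shared_buffers = '128MB';
  -- shared_buffers only takes effect after a server restart, not a reload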
On 11/5/18 7:04 AM, Charles Martin wrote:
Adrian said:
>Tom was referring to this from your previous post:
>(postmaster) total-vm:3068900kB,
>where vm(VM) is Virtual Memory:
>https://stackoverflow.com/questions/18845857/what-does-anon-rss-and-total-vm-mean#22326766
>So what is your shared_
Adrian said:
>Tom was referring to this from your previous post:
>(postmaster) total-vm:3068900kB,
>where vm(VM) is Virtual Memory:
>
https://stackoverflow.com/questions/18845857/what-does-anon-rss-and-total-vm-mean#22326766
>So what is your shared_buffers:
>https://www.postgresql.org/docs/10/stat
Charles Martin wrote:
> >So where is the server located relative to the pg_dump client?
> >On the same machine?
> >If so is it a virtual machine e.g AWS?
> >Across a local or remote network?
>
>
> I gave the command in a terminal session after SSHing to the server from
> the same network
On 11/5/18 5:56 AM, Charles Martin wrote:
Tom said:
>That's kind of odd: a COPY shouldn't really consume very much working
>memory. I suspect that much of the process's apparent VM consumption may
>be shared buffers ... what have you got shared_buffers set to on the old
>server? If it's mo
Tom said:
>That's kind of odd: a COPY shouldn't really consume very much working
>memory. I suspect that much of the process's apparent VM consumption may
>be shared buffers ... what have you got shared_buffers set to on the old
>server? If it's more than half a GB or so, maybe reducing it would
On 11/4/18 2:55 PM, Charles Martin wrote:
Yep, you called it:
Nov 2 20:30:45 localhost kernel: Out of memory: Kill process 30438
(postmaster) score 709 or sacrifice child
Nov 2 20:30:45 localhost kernel: Killed process 30438, UID 26,
(postmaster) total-vm:3068900kB, anon-rss:1695392kB, file-rss:1074692kB
Charles Martin writes:
> Yep, you called it:
> Nov 2 20:30:45 localhost kernel: Out of memory: Kill process 30438
> (postmaster) score 709 or sacrifice child
> Nov 2 20:30:45 localhost kernel: Killed process 30438, UID 26,
> (postmaster) total-vm:3068900kB, anon-rss:1695392kB, file-rss:1074692kB
Not enough swap space?
On 11/04/2018 04:55 PM, Charles Martin wrote:
Yep, you called it:
Nov 2 20:30:45 localhost kernel: Out of memory: Kill process 30438
(postmaster) score 709 or sacrifice child
Nov 2 20:30:45 localhost kernel: Killed process 30438, UID 26,
(postmaster) total-vm:3068900k
Yep, you called it:
Nov 2 20:30:45 localhost kernel: Out of memory: Kill process 30438
(postmaster) score 709 or sacrifice child
Nov 2 20:30:45 localhost kernel: Killed process 30438, UID 26,
(postmaster) total-vm:3068900kB, anon-rss:1695392kB, file-rss:1074692kB
So it's running out of memory w
On 11/4/18 8:38 AM, Charles Martin wrote:
Adrian said:
pg_dump: Error message from server: server closed the connection
unexpectedly
>Is this error the client reporting?
>Is this the same that is showing up in the server log?
Yes, that's the client message, i.e. what appeared in the termi
On 04.11.2018 at 17:38, Charles Martin wrote:
Andreas said:
>which exact minor version please?
PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-23), 64-bit
Okay, I asked to just rule out a stale version. 9.6.7, for instance,
contains som
Adrian said:
>> pg_dump: Error message from server: server closed the connection
>> unexpectedly
>Is this error the client reporting?
>Is this the same that is showing up in the server log?
Yes, that's the client message, i.e. what appeared in the terminal window
that gave the command. The serve
On 03.11.2018 at 23:47, Charles Martin wrote:
When I do a pg_dump using PG 9.6, I got this:
pg_dump: Dumping the contents of table "docfile" failed:
PQgetCopyData() failed.
pg_dump: Error message from server: server closed the connection
unexpectedly
This probably means the server term
Charles Martin wrote:
> When I do a pg_dump using PG 9.6, I got this:
>
> > pg_dump: Dumping the contents of table "docfile" failed: PQgetCopyData()
> > failed.
> > pg_dump: Error message from server: server closed the connection
> > unexpectedly
> > This probably means the server terminated
On 11/3/18 3:47 PM, Charles Martin wrote:
When I do a pg_dump using PG 9.6, I got this:
pg_dump: Dumping the contents of table "docfile" failed:
PQgetCopyData() failed.
pg_dump: Error message from server: server closed the connection
unexpectedly
Is this error the client reporting?
Is this the same that is showing up in the server log?
When I do a pg_dump using PG 9.6, I got this:
pg_dump: Dumping the contents of table "docfile" failed: PQgetCopyData()
failed.
pg_dump: Error message from server: server closed the connection
unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
On 11/3/18 12:57 PM, Charles Martin wrote:
I'd be grateful for some help. I am trying to move a large database from
PostgreSQL 9.6 on Centos 6 to a different server using PostgreSQL 11 on
Centos 7. I can't do a pg_dump because it always fails on the largest
table.
I would answer Ron's questi
On 11/03/2018 02:57 PM, Charles Martin wrote:
I'd be grateful for some help. I am trying to move a large database from
PostgreSQL 9.6 on Centos 6 to a different server using PostgreSQL 11 on
Centos 7. I can't do a pg_dump because it always fails on the largest table.
What error message?
--
A
I'd be grateful for some help. I am trying to move a large database from
PostgreSQL 9.6 on Centos 6 to a different server using PostgreSQL 11 on
Centos 7. I can't do a pg_dump because it always fails on the largest
table. So I tried to do pg_basebackup and copy that to the new PG 11 server.
Except th