Missing RHEL RPM (pg_catcheck-11) in Postgres 11 repo.

2018-11-10 Thread Chatterjee, Satyajit
Hello team,

We are looking to upgrade our application from Postgres 9.5 to 11.x, but we are 
unable to find an RPM named 'pg_catcheck-11' for Red Hat:  
https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7.5-x86_64/.
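
For reference, one way to check what the repository currently publishes, from a 
machine with the PGDG 11 repo configured (the repo id is assumed here to be 
pgdg11), is:

  yum --enablerepo=pgdg11 list available 'pg_catcheck*'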

We would appreciate your help in getting this RPM added to the repo.

Thanks,
-Satyajit



pg_dump out of memory for large table with LOB

2018-11-10 Thread Jean-Marc Lessard
I am running PostgreSQL 9.6.5 on x86_64-pc-mingw64, compiled by gcc.exe (Rev5, 
Built by MSYS2 project) 4.9.2, 64-bit,
on Windows Server 2012 with 12 GB of RAM.
The dumped table is 0.5 TB; it has 17 million rows, and LOBs use about 99% of the space.
pg_dump consumes the entire system memory and swap, then terminates with an 
out-of-memory error.
Is this a bug or normal behavior?
If I do not include LOBs in the dump, it works fine.

Here is the dump output:
C:\Users\Administrator> pg_dump -h localhost -Fc -a -b -t signatures -v > 
d:\postgresql\sig.dmp
pg_dump: last built-in OID is 16383
...
pg_dump: reading row security enabled for table "ibisl1.signatures"
pg_dump: reading policies for table "ibisl1.signatures"
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
out of memory
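
The failure right after "pg_dump: reading large objects" is consistent with 
pg_dump building an in-memory entry for every large object it dumps; with LOBs 
making up 99% of a 0.5 TB table, that can mean millions of entries. A quick way 
to see how many objects are involved (standard catalog, nothing else assumed):

  SELECT count(*) FROM pg_largeobject_metadata;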

Jean-Marc Lessard
Administrateur de base de données / Database Administrator
Ultra Electronics Forensic Technology Inc.
T +1 514 489 4247 x4164
www.ultra-forensictechnology.com


Re: pg_dump out of memory for large table with LOB

2018-11-10 Thread Ron

On 11/09/2018 05:49 PM, Jean-Marc Lessard wrote:

I am running PostgreSQL 9.6.5 on x86_64-pc-mingw64, compiled by gcc.exe (Rev5, 
Built by MSYS2 project) 4.9.2, 64-bit,
on Windows Server 2012 with 12 GB of RAM.
The dumped table is 0.5 TB; it has 17 million rows, and LOBs use about 99% of the space.
pg_dump consumes the entire system memory and swap, then terminates with an 
out-of-memory error.
Is this a bug or normal behavior?
If I do not include LOBs in the dump, it works fine.

Here is the dump output:

C:\Users\Administrator> pg_dump -h localhost -Fc -a -b -t signatures -v > 
d:\postgresql\sig.dmp
pg_dump: last built-in OID is 16383
...
pg_dump: reading row security enabled for table "ibisl1.signatures"
pg_dump: reading policies for table "ibisl1.signatures"
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
out of memory


This looks similar to the recent thread "Trouble Upgrading Postgres".
https://www.postgresql.org/message-id/flat/CAFw6%3DU2oz9rTF0qa0LFMg91bu%3Dhdisfu2-xXU1%3D%3DD7yBif%2B2uw%40mail.gmail.com

Specifically, message ce239c9c-68f2-43e6-a6b6-81c66d0f4...@manitou-mail.org 



"The hex expansion performed by COPY must allocate twice that size,
plus the rest of the row, and if that resulting size is above 1GB, it
will error out with the message you mentioned upthread:
ERROR: invalid memory alloc request size .
So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic."
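
If that is the failure mode here, a query along these lines (run as superuser, 
since pg_largeobject is not publicly readable; each object is stored in 2 kB 
pages, so we sum them per loid) would show whether any large objects approach 
that size:

  SELECT loid, sum(octet_length(data)) AS bytes
  FROM pg_largeobject
  GROUP BY loid
  HAVING sum(octet_length(data)) > 400 * 1024 * 1024
  ORDER BY bytes DESC;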


And message 89b5b622-4c79-4c95-9ad4-b16d0d0da...@manitou-mail.org

"It's undoubtedly very annoying that a database can end up with
non-pg_dump'able contents, but it's not an easy problem to solve. Some time 
ago, work was done to extend the 1GB limit but eventually it got scratched. 
The thread in [1] discusses many details of the problem and why the proposed 
solutions were mostly a band-aid. Basically, the specs of COPY and other 
internal aspects of Postgres are from the 32-bit era, when putting the size 
of an entire CD-ROM in a single row/column was not anticipated as a valid use 
case. It's still a narrow use case today, and applications that need to store 
big pieces of data like that should slice them into chunks, a bit like in 
pg_largeobject, except in much larger chunks, like 1MB.

[1] pg_dump / copy bugs with "big lines" ? 
https://www.postgresql.org/message-id/1836813.YmyOrS99PX%40ronan.dunklau.fr 
"



Jean-Marc Lessard
Administrateur de base de données / Database Administrator
Ultra Electronics Forensic Technology Inc.
T +1 514 489 4247 x4164
www.ultra-forensictechnology.com



--
Angular momentum makes the world go 'round.


Re: pg_dump out of memory for large table with LOB

2018-11-10 Thread Adrian Klaver

On 11/10/18 2:46 PM, Ron wrote:

On 11/09/2018 05:49 PM, Jean-Marc Lessard wrote:

I am running PostgreSQL 9.6.5 on x86_64-pc-mingw64, compiled by gcc.exe (Rev5, 
Built by MSYS2 project) 4.9.2, 64-bit,
on Windows Server 2012 with 12 GB of RAM.
The dumped table is 0.5 TB; it has 17 million rows, and LOBs use about 99% of the space.
pg_dump consumes the entire system memory and swap, then terminates with an 
out-of-memory error.
Is this a bug or normal behavior?
If I do not include LOBs in the dump, it works fine.

Here is the dump output:

C:\Users\Administrator> pg_dump -h localhost -Fc -a -b -t signatures -v > 
d:\postgresql\sig.dmp
pg_dump: last built-in OID is 16383
...
pg_dump: reading row security enabled for table "ibisl1.signatures"
pg_dump: reading policies for table "ibisl1.signatures"
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
out of memory



This looks similar to the recent thread "Trouble Upgrading Postgres".
https://www.postgresql.org/message-id/flat/CAFw6%3DU2oz9rTF0qa0LFMg91bu%3Dhdisfu2-xXU1%3D%3DD7yBif%2B2uw%40mail.gmail.com

Specifically, message 
ce239c9c-68f2-43e6-a6b6-81c66d0f4...@manitou-mail.org 



"The hex expansion performed by COPY must allocate twice that size,
plus the rest of the row, and if that resulting size is above 1GB, it
will error out with the message you mentioned upthread:
ERROR: invalid memory alloc request size .
So there's no way it can deal with the contents over 500MB, and the
ones just under that limit may also be problematic."


I don't think this is the case. The above is an issue because of the maximum 
length of a string that Postgres can process. LOs are different creatures:

https://www.postgresql.org/docs/11/lo-implementation.html

It would also help to see the memory configuration values set for the cluster.
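
For example, a minimal query against the standard pg_settings view:

  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'work_mem',
                 'maintenance_work_mem', 'max_connections');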




And message 89b5b622-4c79-4c95-9ad4-b16d0d0da...@manitou-mail.org

"It's undoubtedly very annoying that a database can end up with
non-pg_dump'able contents, but it's not an easy problem to solve. Some 
time ago, work was done to extend the 1GB limit but eventually it got 
scratched. The thread in [1] discusses many details of the problem and 
why the proposed solutions were mostly a band-aid. Basically, the specs 
of COPY and other internal aspects of Postgres are from the 32-bit era, 
when putting the size of an entire CD-ROM in a single row/column was not 
anticipated as a valid use case. It's still a narrow use case today, and 
applications that need to store big pieces of data like that should 
slice them into chunks, a bit like in pg_largeobject, except in much 
larger chunks, like 1MB.

[1] pg_dump / copy bugs with "big lines" ? 
https://www.postgresql.org/message-id/1836813.YmyOrS99PX%40ronan.dunklau.fr 
"



Jean-Marc Lessard
Administrateur de base de données / Database Administrator
Ultra Electronics Forensic Technology Inc.
T +1 514 489 4247 x4164
www.ultra-forensictechnology.com



--
Angular momentum makes the world go 'round.



--
Adrian Klaver
adrian.kla...@aklaver.com