Hello Folx!

We are running into a problem that I wanted to run by the group before reporting 
it as a bug.

We have a medium-sized database that, when dumped, produces files larger than 
4 GB inside the tar archive.  When we restore, it seems that pg_restore has a 
4 GB limit for reading files: once it has read 4 GB of a file, it moves on to 
the next one.  Has anyone else experienced this problem?  Is there a documented 
way to mitigate this issue that I have missed?  Is this a bug, or are we doing 
something incorrectly that causes it?  It seems to me that if pg_restore has a 
hard 4 GB file-size limit, then pg_dump should enforce the same limit.

TIA

-bill

