Hello! After splitting the job into smaller pieces (e.g. 18x 1Mrow), the backend process now seems to release the memory after each subjob. So the trigger queue does seem to be the likely culprit. Until now this queue was unknown to me.
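For anyone hitting the same problem, the batching described above can be sketched as follows. This is a minimal, hypothetical illustration (file name, table name, and batch size are made up); the point is simply that each COPY runs in its own transaction, so the trigger queue built up by the foreign-key checks is released after every batch instead of accumulating across all 18M rows.

```python
# Hypothetical sketch: load a huge file in fixed-size batches so each
# COPY's trigger queue is released before the next batch starts.
from itertools import islice

def batches(rows, size):
    """Yield successive lists of at most `size` rows from an iterable."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Usage sketch (client library and names are assumptions, not from the report):
# for batch in batches(open("data.tsv"), 1_000_000):
#     issue one COPY mytable FROM STDIN per batch, committing after each,
#     e.g. via psycopg2's copy_expert on a fresh transaction.
```

The same effect can be had on the command line by splitting the input file (e.g. with `split -l`) and running one `\copy` per piece in psql.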
Perhaps a note in the documentation of COPY FROM and in the section "13.4.2 Use COPY FROM" within "Performance Tips" would prevent other people like me from doing such bad things. Many thanks for the fast help.

Andreas Heiduk

Stephan Szabo <[EMAIL PROTECTED]> wrote on 24.08.04 19:25:56:

> On Tue, 24 Aug 2004, PostgreSQL Bugs List wrote:
>
> > I'm trying to COPY ~18Mrows into a table which has a foreign key to another
> > table. Memory and swap are exhausted and finally the postgres.log says:
>
> This is very possibly the space taken up by the trigger queue (which
> cannot currently spill out to disk). If you load a smaller number of rows
> does the space go up and then down after the copy ends?