On Fri, Feb 03, 2006 at 19:38:04 +0100, Patrick Rotsaert <[EMAIL PROTECTED]> wrote:
> I have 5.1GB of free disk space. If this is the cause, I have a
> problem... or is there another way to extract (and remove) duplicate rows?
How about processing a subset of the ids in each pass, making multiple passes until all of the ids have been checked? As long as you don't have to use overly small chunks, this might work for you.
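A minimal sketch of the chunked idea, using SQLite standing in for Postgres so it runs self-contained. The table name `t`, the `id`/`payload` columns, and the chunk size are all hypothetical; the point is only that each pass groups and deletes within a small id range, so the working set stays well under the available disk:

```python
import sqlite3

# Hypothetical table with duplicate rows (name and schema are assumptions).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 'a'), (1, 'a'), (2, 'b'),
                 (3, 'c'), (3, 'c'), (3, 'c')])

CHUNK = 2  # ids per pass; in practice, sized to fit your free disk space
max_id = cur.execute("SELECT max(id) FROM t").fetchone()[0]

lo = 0
while lo <= max_id:
    hi = lo + CHUNK
    # Look for duplicated ids within this slice of the id space only.
    dups = cur.execute(
        "SELECT id FROM t WHERE id >= ? AND id < ? "
        "GROUP BY id HAVING count(*) > 1", (lo, hi)).fetchall()
    for (dup_id,) in dups:
        # Keep one copy (smallest rowid), delete the extras.
        cur.execute(
            "DELETE FROM t WHERE id = ? AND rowid <> "
            "(SELECT min(rowid) FROM t WHERE id = ?)", (dup_id, dup_id))
    lo = hi

remaining = cur.execute("SELECT count(*) FROM t").fetchone()[0]
print(remaining)  # one row left per distinct id
```

In Postgres the same per-chunk delete can be written against the system column `ctid` instead of SQLite's `rowid`; the multiple small GROUP BY passes are what keep the temporary sort/hash space bounded.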