On Thu, 2009-10-08 at 11:59 -0400, Robert Haas wrote:
> On Thu, Oct 8, 2009 at 11:50 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> >> Another possible approach, which isn't perfect either, is the idea of
> >> allowing COPY to generate a single column of output of type text[].
> >> That greatly reduces the number of possible error cases, and at least
> >> gets the data into the DB where you can hack on it.  But it's still
> >> going to be painful for some use cases.
> >
> > Yeah, that connects to the previous discussion about refactoring COPY
> > into a series of steps that the user can control.
> >
> > Ultimately, there's always going to be a tradeoff between speed and
> > flexibility.  It may be that we should just say "if you want to import
> > dirty data, it's gonna cost ya" and not worry about the speed penalty
> > of subtransaction-per-row.  But that still leaves us with the 2^32
> > limit.  I wonder whether we could break down COPY into sub-sub
> > transactions to work around that...
>
> How would that work?  Don't you still need to increment the command counter?
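Tom's text[] idea amounts to deferring all per-column parsing until the raw
data is already inside the database. A minimal client-side sketch of that
split (the function name and delimiter handling are illustrative, not any
PostgreSQL API):

```python
def rows_as_text_arrays(lines, delimiter="\t"):
    """Split raw COPY-style input lines into lists of strings with no
    type coercion at all, mimicking a single text[] output column.
    Type and constraint errors are deferred until the data is already
    loaded and can be fixed with ordinary SQL."""
    return [line.rstrip("\n").split(delimiter) for line in lines]


# Even a malformed numeric field loads cleanly as text:
staged = rows_as_text_arrays(["1\talice\n", "oops\t\n"])
```

Only the subsequent cast from text to the real column types can fail, which
is what "greatly reduces the number of possible error cases" at load time.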
Couldn't you just commit each range of subtransactions based on some
threshold?

COPY foo FROM '/tmp/bar/' COMMIT_THRESHOLD 1000000;

It counts to 1 million, commits, and starts a new transaction. Yes, there
would be 1 million subtransactions, but once it hits those cleanly, it
commits.

?

Joshua D. Drake

> > ...Robert

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
If the world pushes look it in the eye and GRR. Then push back harder. - Salamander

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
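The COMMIT_THRESHOLD idea above is essentially batched commits: buffer rows,
and whenever the count reaches the threshold, commit and start over, so the
subtransaction count never approaches the 2^32 limit. A minimal sketch of
that control flow (the `commit` callback stands in for a real transaction
commit and is purely hypothetical):

```python
def batched_commit(rows, threshold, commit):
    """Feed rows into batches of at most `threshold`, invoking `commit`
    on each full batch and once more for any final partial batch.
    `commit` is a stand-in for committing the enclosing transaction."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= threshold:
            commit(batch)   # threshold reached: commit and start fresh
            batch = []
    if batch:               # commit the trailing partial batch, if any
        commit(batch)


committed = []
batched_commit(range(5), 2, committed.append)
# committed now holds three batches: two full, one partial
```

The tradeoff, as in the thread, is that a failure inside a batch only rolls
back that batch's uncommitted rows, not the batches already committed.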