Re: [PERFORM] Slowing UPDATEs inside a transaction

2011-03-04 Thread Merlin Moncure
On Fri, Mar 4, 2011 at 8:20 AM, Robert Haas wrote:
> On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke wrote:
>> Robert Haas wrote:
>>> Old row versions have to be kept around until they're no longer of
>>> interest to any still-running transaction.
>>
>> Thanks for the explanation.
>>
>> Regarding the …

Re: [PERFORM] Slowing UPDATEs inside a transaction

2011-03-04 Thread Robert Haas
On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke wrote:
> Robert Haas wrote:
>> Old row versions have to be kept around until they're no longer of
>> interest to any still-running transaction.
>
> Thanks for the explanation.
>
> Regarding the snippet above, why would the intermediate history of
> multip…

Re: [PERFORM] Slowing UPDATEs inside a transaction

2011-03-04 Thread Matt Burke
Robert Haas wrote:
> Old row versions have to be kept around until they're no longer of
> interest to any still-running transaction.

Thanks for the explanation. Regarding the snippet above, why would the intermediate history of multiply-modified uncommitted rows be of interest to anything, or is …
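The question above can be illustrated with a toy model. This is plain Python, not PostgreSQL's actual visibility code (it ignores HOT chains, rollback, and subtransactions; `TupleVersion`, `update`, and `visible` are hypothetical names), but it shows one reason intermediate versions inside a single transaction can still matter: each command gets a command id, and a cursor whose snapshot was taken at an earlier command id must still see the version that was current then.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TupleVersion:
    value: str
    xmin: int                   # transaction that created this version
    cmin: int                   # command id within that transaction
    xmax: Optional[int] = None  # transaction that superseded it
    cmax: Optional[int] = None  # command id of the superseding command

versions: list[TupleVersion] = []

def update(xid: int, cid: int, new_value: str) -> None:
    """An UPDATE marks the live version superseded and appends a new one."""
    for v in versions:
        if v.xmax is None:
            v.xmax, v.cmax = xid, cid
    versions.append(TupleVersion(new_value, xid, cid))

def visible(v: TupleVersion, xid: int, cid: int) -> bool:
    """Own-transaction visibility for a snapshot taken at (xid, cid)."""
    created_before = v.xmin == xid and v.cmin < cid
    not_yet_superseded = v.xmax is None or (v.xmax == xid and v.cmax >= cid)
    return created_before and not_yet_superseded

# One transaction (xid=100) updates the same row three times.
versions.append(TupleVersion("v0", xmin=100, cmin=0))
update(100, 1, "v1")
update(100, 2, "v2")
update(100, 3, "v3")

# A cursor opened after the first UPDATE (snapshot at cid=2) still
# needs "v1", so that intermediate version cannot simply be discarded.
snapshot = [v.value for v in versions if visible(v, xid=100, cid=2)]
print(snapshot)       # ['v1']
print(len(versions))  # 4 stored versions for one logical row
```

The same mechanics explain the slowdown: every scan of the row has to walk past all four versions, even though only one is visible to any given snapshot.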

Re: [PERFORM] Slowing UPDATEs inside a transaction

2011-03-03 Thread Merlin Moncure
On Thu, Mar 3, 2011 at 8:26 AM, Robert Haas wrote:
> On Thu, Mar 3, 2011 at 9:13 AM, Matt Burke wrote:
>> Hi. I've only been using PostgreSQL properly for a week or so, so I
>> apologise if this has been covered numerous times, however Google is
>> producing nothing of use.
>>
>> I'm trying to im…

Re: [PERFORM] Slowing UPDATEs inside a transaction

2011-03-03 Thread Robert Haas
On Thu, Mar 3, 2011 at 9:13 AM, Matt Burke wrote:
> Hi. I've only been using PostgreSQL properly for a week or so, so I
> apologise if this has been covered numerous times, however Google is
> producing nothing of use.
>
> I'm trying to import a large amount of legacy data (billions of
> denormali…

[PERFORM] Slowing UPDATEs inside a transaction

2011-03-03 Thread Matt Burke
Hi. I've only been using PostgreSQL properly for a week or so, so I apologise if this has been covered numerous times, however Google is producing nothing of use.

I'm trying to import a large amount of legacy data (billions of denormalised rows) into a pg database with a completely different schem…
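A minimal sketch of the usual mitigation for this kind of load (illustrative Python, not PostgreSQL; `run` is a hypothetical helper, and it idealizes vacuum as reclaiming superseded versions immediately after each COMMIT): because every UPDATE appends a new row version, and versions superseded inside a still-open transaction cannot be reclaimed, committing in batches keeps the version pile-up bounded by the batch size instead of the total update count.

```python
def run(updates: int, batch_size: int) -> int:
    """Return the peak number of stored versions for one logical row
    when `updates` UPDATEs are issued, committing every `batch_size`."""
    versions = 1            # the original row version
    peak = versions
    since_commit = 0
    for _ in range(updates):
        versions += 1       # every UPDATE appends a new version
        since_commit += 1
        peak = max(peak, versions)
        if since_commit == batch_size:
            # After COMMIT, the superseded versions are dead to every
            # snapshot, so (auto)vacuum can reclaim them (idealized
            # here as happening instantly).
            versions = 1
            since_commit = 0
    return peak

print(run(1000, batch_size=1000))  # one big transaction -> 1001 versions
print(run(1000, batch_size=50))    # commit every 50     -> 51 versions
```

In the one-big-transaction case every scan must walk all 1001 versions of the row by the end, which is consistent with the per-UPDATE cost growing as the transaction runs.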