[EMAIL PROTECTED] (Tom Lane) writes:
> Mats Lofkvist <[EMAIL PROTECTED]> writes:
> > But when doing ~1000 updates (i.e. setting val0 and val1 with
> > a where on an existing key0/key1/key2 triplet), I get this which
> > seems very strange to me:
>
> I suppose you repeatedly updated the same row 1000 times?  That creates
> an O(N^2) behavior because the dead tuples have to be rechecked again
> and again.
>
> 7.3 will be smarter about this.
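(For reference, the update pattern under discussion is roughly the
following; the table name and exact column types here are only an
illustrative sketch, not my actual schema:)

    -- unique key that the updates never change
    CREATE TABLE test (
        key0  integer NOT NULL,
        key1  integer NOT NULL,
        key2  integer NOT NULL,
        val0  integer,
        val1  integer,
        UNIQUE (key0, key1, key2)
    );

    INSERT INTO test VALUES (1, 1, 1, 0, 0);

    -- repeated ~1000 times; each update leaves a dead index entry that
    -- _bt_check_unique has to look at again on the next update
    -- (until a VACUUM cleans them out)
    UPDATE test SET val0 = val0 + 1, val1 = val1 + 1
        WHERE key0 = 1 AND key1 = 1 AND key2 = 1;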
Seems like I get the same behaviour with 7.3 beta1: updating the same
row ~20k times and then 1k times more with profiling enabled (and with
no vacuum in between) gives:

-----------------------------------------------
                2.72  166.12    1002/1002          _bt_doinsert [17]
[18]    53.7    2.72  166.12    1002           _bt_check_unique [18]
               15.81  149.01 21721926/21721926     _bt_isequal [19]
                0.05    1.00  221414/412979        _bt_getbuf [40]
                0.01    0.21  221414/409772        _bt_relbuf [91]
                0.01    0.02    2709/6241          heap_fetch [187]
                0.00    0.00    5418/2726620       LockBuffer [50]
                0.00    0.00    1002/65406369      _bt_binsrch <cycle 1> [270]
                0.00    0.00    2709/1333460       ReleaseBuffer [76]
                0.00    0.00    2709/4901          HeapTupleSatisfiesVacuum [519]
                0.00    0.00    1707/4910          SetBufferCommitInfoNeedsSave [652]
-----------------------------------------------

(In my case, I think the call to _bt_check_unique could be avoided
altogether, since the update doesn't change any of the columns in the
unique key. But maybe that optimization is much harder than just
avoiding the repeated rechecks of the dead tuples?)

      _
Mats Lofkvist
[EMAIL PROTECTED]