On Sun, Jun 12, 2011 at 06:20:53PM -0400, Robert Haas wrote:
> On Sun, Jun 12, 2011 at 3:18 PM, Noah Misch <n...@leadboat.com> wrote:
> > Indeed, the original patch slowed it by about 50%.  I improved the patch,
> > adding a global SharedInvalidMessageCounter to increment as we process
> > messages.  If this counter does not change between the RangeVarGetRelid()
> > call and the post-lock AcceptInvalidationMessages() call, we can skip the
> > second RangeVarGetRelid() call.  With the updated patch, I get these
> > timings (in ms) for runs of "SELECT nmtest(10000000)":
> >
> > master:  19697.642, 20087.477, 19748.995
> > patched: 19723.716, 19879.538, 20257.671
> >
> > In other words, no significant difference.  Since the patch removes the
> > no-longer-needed pre-lock call to AcceptInvalidationMessages(), changing
> > to "relation_close(r, NoLock)" in the test case actually reveals a 15%
> > performance improvement.  Granted, nothing to get excited about in light
> > of the artificiality.
>
> In point of fact, given the not-so-artificial results I just posted on
> another thread ("lazy vxid locks") I'm *very* excited about trying to
> reduce the cost of AcceptInvalidationMessages().
Quite interesting.  A quick look suggests there is room for optimization
there.

> I haven't reviewed your patch in detail, but is there a way we can
> encapsulate the knowledge of the invalidation system down inside the
> sinval machinery, rather than letting the heap code have to know directly
> about the counter?  Perhaps AIV() could return true or false depending on
> whether any invalidation messages were processed, or somesuch.

I actually did it exactly that way originally.  The problem was that the
return value only applies to the given call, while I wished to answer a
question like "Did any call to AcceptInvalidationMessages() between code
point A and code point B process a message?"  Adding
AcceptInvalidationMessages() calls to code between A and B would make the
return-value test yield a false negative.  A global counter was the best
thing I could come up with that avoided this hazard.