Robert Haas <robertmh...@gmail.com> writes:
> ... this again is my point: why can't we make the same argument about
> two spinlocks situated on the same cache line?  I don't have a bit of
> trouble believing that doing the same thing with a couple of spinlocks
> could sometimes work out well, too, but Tom is adamantly opposed to
> that.
I think you might be overstating my position here.  What I'm concerned about is that we be sure that spinlocks are held for a sufficiently short time that it's very unlikely that we get pre-empted while holding one.  I don't have any particular bright line about how short a time that is, but more than "a few instructions" worries me.  As you say, the Linux kernel is a bad example to follow because it hasn't got a risk of losing its timeslice while holding a spinlock.

The existing coding rules discourage looping (though I might be okay with a small constant loop count), and subroutine calls (mainly because somebody might add $random_amount_of_work to the subroutine if they don't realize it can be called while holding a spinlock).  Both of these rules are meant to reduce the risk that a short interval balloons into a long one due to unrelated coding changes.

The existing coding rules also discourage spinlocking within a spinlock, and the reason for that is that there's no very clear upper bound to the time required to obtain a spinlock, so that there would also be no clear upper bound to the time you're holding the original one (thus possibly leading to cascading performance problems).

So ISTM the question we ought to be asking is whether atomic operations have bounded execution time, or more generally what the distribution of execution times is likely to be.  I'd be OK with an answer that includes "sometimes it can be long" so long as "sometimes" is "pretty damn seldom".  Spinlocks have a nonzero risk of taking a long time already, since we can't entirely prevent the possibility of losing our timeslice while holding one.  The issue here is just to be sure that that happens seldom enough that it doesn't cause performance problems.  If we fail to do that we might negate all the desired performance improvements from adopting atomic ops in the first place.

			regards, tom lane
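
[For concreteness, a minimal sketch of the two patterns under discussion, assuming the storage/spin.h and port/atomics.h interfaces; the struct and function names here are made up for illustration.  The first function shows the sort of critical section the existing coding rules aim for (a handful of straight-line instructions, no loops, no subroutine calls, no nested locks); the second shows the lock-free atomic alternative whose execution-time distribution is the open question.]

#include "postgres.h"

#include "port/atomics.h"
#include "storage/spin.h"

/*
 * Hypothetical shared counter, used only to illustrate the coding rules
 * discussed above.
 */
typedef struct SharedCounter
{
	slock_t		mutex;			/* protects "value" */
	uint32		value;
} SharedCounter;

/*
 * Conforming spinlock usage: the lock is held across a few straight-line
 * instructions -- no loops, no subroutine calls, and in particular no
 * attempt to acquire another spinlock while this one is held.
 */
static uint32
counter_bump_spinlocked(SharedCounter *c)
{
	uint32		result;

	SpinLockAcquire(&c->mutex);
	result = ++c->value;
	SpinLockRelease(&c->mutex);

	return result;
}

/*
 * The atomic-ops alternative: no lock is held at all, so the question
 * becomes whether the hardware operation itself has a reasonably bounded
 * execution time.
 */
static uint32
counter_bump_atomic(pg_atomic_uint32 *value)
{
	return pg_atomic_fetch_add_u32(value, 1) + 1;
}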