On 6 January 2017 at 02:59, Bruce Momjian <br...@momjian.us> wrote:
>
> Agreed. No need in adding overhead for short-lived locks because the
> milli-second values are going to be meaningless to users. I would be
> happy if we could find some weasel value for non-heavyweight locks.
For what it's worth, I don't think this is true. It may be true that we
don't need precise measurements of short-lived locks, but we do need
accurate measurements even when they fall in the expected range. What
users need to know is, in aggregate, how much of the time the database
spends working on their queries goes into each state. Even if no LWLock
is held for more than a few milliseconds at a time, if it turns out that
80% of the time is being spent waiting on LWLocks then there's no point
in worrying about CPU speed, for example. And knowing *which* LWLock is
taking up an aggregate 80% of the time would point at either
configuration changes or code changes to reduce that contention.

I would actually argue the reverse of the above proposal would be more
useful. What we need are counts of how often LWLock waits take longer
than, say, 50ms; for shorter waits we need to know how long they took.
Perhaps not precisely for individual waits, but the aggregate totals
need to be right, so as long as the measurements are accurate that
would be sufficient. An accurate but imprecise measurement (+/- 10ms)
with low overhead is better than a precise measurement with high
overhead.

--
greg
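To make that concrete, here is a rough standalone sketch (not anything
from the patch under discussion) of the kind of cheap aggregate
accounting I have in mind: accumulate per-lock totals with a coarse
clock and only count waits individually once they cross the 50ms
threshold. The names (lwlock_wait_stats, record_lwlock_wait) are
invented for illustration, and CLOCK_MONOTONIC_COARSE is assumed to be
available (Linux); otherwise it falls back to CLOCK_MONOTONIC.

#define _POSIX_C_SOURCE 199309L

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NUM_TRACKED_LWLOCKS 64
#define LONG_WAIT_THRESHOLD_US (50 * 1000)   /* the 50ms suggested above */

typedef struct
{
    uint64_t total_wait_us;   /* aggregate wait time; only the total needs to be accurate */
    uint64_t wait_count;      /* number of waits observed */
    uint64_t long_wait_count; /* waits exceeding the 50ms threshold */
} lwlock_wait_stats;

static lwlock_wait_stats wait_stats[NUM_TRACKED_LWLOCKS];

/*
 * Coarse timestamp in microseconds. CLOCK_MONOTONIC_COARSE keeps the
 * per-call overhead low at the cost of a few milliseconds of precision,
 * which is fine when only the aggregate has to be right.
 */
static uint64_t
coarse_now_us(void)
{
    struct timespec ts;

#ifdef CLOCK_MONOTONIC_COARSE
    clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
#else
    clock_gettime(CLOCK_MONOTONIC, &ts);
#endif
    return (uint64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

static void
record_lwlock_wait(int lock_id, uint64_t wait_us)
{
    lwlock_wait_stats *s = &wait_stats[lock_id % NUM_TRACKED_LWLOCKS];

    s->total_wait_us += wait_us;
    s->wait_count++;
    if (wait_us > LONG_WAIT_THRESHOLD_US)
        s->long_wait_count++;
}

int
main(void)
{
    /* Pretend lock 3 was waited on twice: once briefly, once for 60ms. */
    uint64_t start = coarse_now_us();

    record_lwlock_wait(3, coarse_now_us() - start);  /* ~0us short wait */
    record_lwlock_wait(3, 60 * 1000);                /* simulated long wait */

    printf("lock 3: total %llu us over %llu waits (%llu long)\n",
           (unsigned long long) wait_stats[3].total_wait_us,
           (unsigned long long) wait_stats[3].wait_count,
           (unsigned long long) wait_stats[3].long_wait_count);
    return 0;
}

Each observation is only imprecise by whatever the coarse clock's
resolution is, but the totals and the count of long waits stay accurate,
which is the property that matters for the aggregate view.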