Hi,

I wanted to run some performance tests for [1] and looked at the profile of a
workload with a lot of short queries, and was rather surprised to see
pgstat_report_stat() as the top entry - that certainly didn't use to be the
case.

For the, obviously rather extreme, workload of a pgbench session just running
  SELECT 1;
I see about a 6% regression from 17.

That seems too high, even for such an absurd workload.


It's one thing to have a loop over all potential stats kinds when we know
there are pending stats, that's not that frequent. But fc415edf8ca8 changed a
test consisting of three checks of a variable and one direct function call to
a loop over an array with 256 elements, with a number of indirect function
calls - all of that happens *before* pgstat_report_stat() decides that there
are no stats to report.

It seems rather unsurprising that that causes a slowdown.

The stated purpose of the pre-check is:
        /* Don't expend a clock check if nothing to do */

but you made it way more expensive than a clock check would have been (not
counting old vmware installs or such, where clock checks take ages).

If I change the loop count to only be the builtin stats kinds, the overhead
goes away almost completely.

Greetings,

Andres Freund

[1] 
https://www.postgresql.org/message-id/aGKSzFlpQWSh%2F%2B2w%40ip-10-97-1-34.eu-west-3.compute.internal

