On 09/07/10 14:26, Robert Haas wrote:
> On Thu, Jul 8, 2010 at 10:21 PM, Mark Kirkwood
> <mark.kirkw...@catalyst.net.nz> wrote:
>> Purely out of interest, since the old repo is still there, I had a quick
>> look at measuring the overhead, using 8.4's pgbench to run two custom
>> scripts: one consisting of a single 'SELECT 1', the other having 100
>> 'SELECT 1' statements - the latter being probably the worst-case scenario.
>> Running 1, 2, 4, 8 clients and 1000-10000 transactions gives an overhead
>> in the 5-8% range [1] (i.e. transactions/s decrease by this amount with
>> the scheduler turned on [2]). While a lot better than 30% (!) it is
>> certainly higher than we'd like.
> Isn't the point here to INCREASE throughput?
LOL - yes it is! Josh wanted to know what the overhead was for the queue
machinery itself, so I'm running a test to show that (i.e. I have a queue
with its thresholds set higher than the test will ever load it, so nothing
actually gets queued and only the bookkeeping cost shows up).
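
For reference, the runs look roughly like this (script and database names
below are illustrative placeholders rather than the actual files, and I've
left out the settings from the patch that switch the scheduler on and off):

  # single-statement script
  echo "SELECT 1;" > select1.sql

  # worst-case script: 100 copies of the same statement
  for i in $(seq 1 100); do echo "SELECT 1;"; done > select100.sql

  # 1, 2, 4 and 8 clients, 1000-10000 transactions each, run once with
  # the scheduler enabled and once with it disabled
  for c in 1 2 4 8; do
      pgbench -n -c $c -t 10000 -f select100.sql testdb
  done
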
In the situation where (say) 11 concurrent queries of a certain type make
your system unusable, but 10 are fine, then constraining it to a maximum of
10 will tend to improve throughput. By how much is hard to say - for
instance, if it stops the Linux OOM killer from shutting postgres down, the
improvement is infinite I guess :-)
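
Just to illustrate the capping idea itself (nothing to do with the patch
internals), you can fake the same effect crudely from the client side by
never letting more than 10 sessions run at once:

  # allow at most 10 concurrent sessions; 'SELECT 1' is just a stand-in
  # for whatever expensive query would otherwise swamp the box
  seq 1 100 | xargs -P 10 -I{} psql -d testdb -c "SELECT 1"

The patch does something equivalent inside the server, by queueing backends
once the configured threshold is reached.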
Cheers
Mark