That model might make some sense if you think e.g. of a web application,
where the web server has a timeout for how long it waits to get a
database connection from a pool, but once a query is started, the
transaction is considered a success no matter how long it takes. The
latency limit would be that timeout. But I think a more useful model is
that when the user clicks a button, they wait at most X seconds for the
result. If that deadline is exceeded, the web server will return an
error (e.g. a 504 gateway timeout), or the user will simply get bored
and go away, and the transaction is considered a failure.
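For illustration, a minimal sketch of that model (Python, names are
made up, not pgbench internals): a transaction counts as a success only
if it completes within the deadline, measured from the user's click.

    import time

    DEADLINE = 2.0  # seconds the user is willing to wait (assumed value)

    def timed_transaction(run):
        start = time.monotonic()
        run()                       # the actual database work
        elapsed = time.monotonic() - start
        return elapsed <= DEADLINE  # True = success, False = failure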
Correct, the whole TPC-B model better fits an application where client
requests enter a queue at the specified TPS rate and that queue is processed.
While we are at it, note that in the original TPC-B specification the
transaction duration measured is the time between receiving the client
request (in current pgbench under throttling, that is when the
transaction is scheduled) and when the request is answered. This is the
client-visible response time, which has nothing to do with the database
latency.
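For what it is worth, a sketch of the distinction (Python, illustrative
names): under throttling, the client-visible response time is the
schedule lag plus the database latency.

    def client_response_time(scheduled, started, finished):
        lag = started - scheduled      # time spent waiting behind schedule
        latency = finished - started   # database-side latency
        return lag + latency           # what the client actually waited

With no throttling, scheduled == started and the two measures coincide.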
Ok. This corresponds to the definition used in the current patch. However
ISTM that the TPC-B benchmark is "as fast as possible"; there is no
underlying schedule as with throttled pgbench.
As per TPC-B, the entire test is only valid if 90% of all client response
times are within 2 seconds.
It would be useful if pgbench would
A) measure and report that client response time in the per-transaction
log files, and
I have never used the per-transaction log file. It may well already
contain this information when not throttling; under throttling, I do
not know.
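For reference, a hedged sketch of extracting per-transaction times from
a pgbench -l log (the exact field layout depends on the pgbench version
and options; here I assume the third field is the transaction time in
microseconds):

    def latencies(path):
        with open(path) as f:
            for line in f:
                fields = line.split()
                yield int(fields[2])  # transaction time in us (assumed position)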
B) report at the end what percentage of transactions finished within
a specified response time constraint (default 2 seconds).
What is currently reported is the complement (the percentage of
transactions completed beyond the time limit).
Note that despite pg's appalling latency performance, it may stay well
over the 90% limit, or even at 100%: when things are going well a lot of
transactions run in about a millisecond, while when things are going
badly transactions take a long time (although possibly still under or
around 1 s), *but* very few transactions get through, so the throughput
is very small. The fact that during 15 seconds only 30 transactions are
processed is a detail that does not show up in the metric.
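A worked example with made-up numbers: say 45 seconds go well at
1000 tps with ~1 ms latency, then 15 seconds go badly during which only
30 transactions pass, each taking ~1 s.

    fast = [0.001] * 45000   # 45 s at 1000 tps, ~1 ms each
    slow = [1.0] * 30        # 15 s during which only 30 transactions pass
    times = fast + slow
    within = sum(1 for t in times if t <= 2.0) / len(times)
    print(within)            # 1.0: the criterion passes despite the collapse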
--
Fabien.