Hi All,

I too have performed benchmarking of this patch on a large machine (128 CPUs, 520GB RAM, Intel x86-64 architecture) and would like to share my observations. (Please note that, since I had to reverify the readings at a few client counts, it took me some time to share these results.)
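For reference, the runs were plain pgbench; the exact non-default GUCs and pgbench options are in the attached result sheets, so the sketch below is only illustrative (the client count, thread count, duration, and connection mode shown are assumptions, not the exact values I used):

# Illustrative sketch only -- exact GUCs/options are in the attached sheets.
# Initialize at scale factor 300 (fits in shared buffers) or 1000 (doesn't):
pgbench -i -s 300 bench

# Read-only runs (Cases 1 and 2): pgbench's built-in select-only script.
# The -c/-j/-T values here are placeholders, not my exact settings.
pgbench -S -M prepared -c 256 -j 64 -T 300 bench

# Read-write runs (Cases 3 and 4): pgbench's default TPC-B-like script.
pgbench -M prepared -c 256 -j 64 -T 300 bench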
Case1: Data fits in shared buffers, read-only workload:
-------------------------------------------------------------------------------
For data that fits in shared buffers, I took readings at scale factor 300. The result sheet 'results-readonly-300-1000-SF', containing the median of 3 runs, is attached to this mail. In this case I see a very good performance improvement with the patch, particularly at high client counts (156 - 256).

Case2: Data doesn't fit in shared buffers, read-only workload:
--------------------------------------------------------------------------------------
For data that doesn't fit in shared buffers, I took readings at scale factor 1000. The result sheet 'results-readonly-300-1000-SF' is attached to this mail. In this case the improvement is not as large as in Case1; it varies only in the range of 2-7%. The good thing, though, is that there is no regression.

Case3: Data fits in shared buffers, read-write workload:
-----------------------------------------------------------------------------
In this case, the TPS on head and patch are very close to each other, with a small variation of (+-)3-4%, which I assume is run-to-run variation. Please find the result sheet 'results-readwrite-300-1000-SF' attached.

Case4: Data doesn't fit in shared buffers, read-write workload:
----------------------------------------------------------------------------------------
In this case as well, the TPS on head and patch are very close to each other, with a small variation of 1-2%, which again is run-to-run variation. Please find the result sheet 'results-readwrite-300-1000-SF' attached.

Please note that the non-default GUC parameters and the pgbench settings are all detailed in the result sheets attached to this mail. Also, I have not used pg_prewarm in my test script.

Thank you.

On Mon, Feb 13, 2017 at 9:43 PM, Bernd Helmle <maili...@oopsware.de> wrote:
> On Monday, 13.02.2017 at 16:55 +0300, Alexander Korotkov wrote:
>>
>> Thank you for testing.
>>
>> Yes, the influence seems to be low. But nevertheless it's important
>> to ensure that there is no regression here.
>> Despite pg_prewarm'ing and running the tests for 300s, there is
>> still significant variation.
>> For instance, with client count = 80:
>> * pgxact-result-2.txt – 474704
>> * pgxact-results.txt – 574844
>>
>> Could some background processes influence the tests? Or could it be
>> another virtual machine?
>> Also, I wonder why I can't see this variation on the graphs.
>> Another issue with the graphs is that we can't see details of read
>> and write
>
> Whoops, good catch. I've mistakenly copied the wrong y-axis for these
> results in the gnuplot script, shame on me. New plots attached.
>
> You're right, the 2nd run with the pgxact alignment patch is notable.
> I've realized that there was a pgbouncer instance running from a
> previous test, but I'm not sure whether that could explain the
> difference.
>
>> TPS variation on the same scale, because the write TPS values are
>> too low. I think you should draw the write benchmark on a separate
>> graph.
>
> The Linux LPAR is the only one used atm. We've got some more time for
> Linux now and I'm going to prepare Tomas' script to run. Not sure I
> can get to it today, though.
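P.S. On Alexander's point about read and write TPS not being readable on the same scale: besides a separate graph, a second y-axis would also work. A rough gnuplot sketch of that (the data file and column layout here are invented purely for illustration):

gnuplot <<'EOF'
# Put the (much lower) read-write TPS on a secondary y-axis so it is
# not flattened by the read-only scale. File/column names are made up.
set terminal png size 800,600
set output "tps.png"
set xlabel "clients"
set ylabel "read-only TPS"
set y2label "read-write TPS"
set ytics nomirror
set y2tics
plot "readonly.dat"  using 1:2 axes x1y1 with linespoints title "read-only", \
     "readwrite.dat" using 1:2 axes x1y2 with linespoints title "read-write"
EOF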
results-readonly-300-1000-SF.xlsx
Description: MS-Excel 2007 spreadsheet
results-readwrite-300-1000-SF.xlsx
Description: MS-Excel 2007 spreadsheet