Last week I ran some performance tests on various types of
back-ends (FSFS formats 6 and 7, and FSX) served by 1.9, on machines
similar to our company's production svn setup: the server is a
Solaris SPARC T5 with its back-end storage on a NetApp cluster mounted
via NFS; the client is a Windows machine on a 100 Mbit LAN.

Disclaimer: this is not a scientific test. Most parts were not fully
isolated from production, so there were the normal fluctuations in
available bandwidth (NetApp cluster, network, ...). YMMV.


== Summary ==

FSFS f7 (packed) and FSFS f6 without dir-deltification (created by
1.8, unpacked) go head to head. Large log/blame requests and export
are somewhat faster on f7; smaller log/blame/list requests are
somewhat faster on f6_1.8. Disk storage: f7 is 40% smaller than f6_1.8
(probably because of dir-deltification). It seems that f7 is faster
when disk caches are cold, but this is hard to test and quantify (how
cold is "cold"?). I think I'll go with f7, for its cold-cache
performance on large requests, the smaller footprint (and better
admin-ability because of the packed form), and the faster performance
on 'export'.
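
(For reference, setting up that f7 variant is basically just the defaults
of svnadmin 1.9 -- a minimal sketch; the dump file name is a placeholder:)

    svnadmin create repos_1.9_f7_packed            # 1.9 creates FSFS format 7 by default
    svnadmin load repos_1.9_f7_packed < repo.dump  # load the existing history
    svnadmin pack repos_1.9_f7_packed              # pack before serving it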

Further:

* f6 with dir-deltification (the default when creating a
--compatible-version=1.8 repository with svnadmin 1.9) is a no-go:
it's 2 to 3 times slower than f7 and f6_1.8 for log and blame (the
relevant fsfs.conf settings are shown in the sketch after this list).

* The --block-read option makes f7 even faster for large requests
(record numbers for export, huge log and huge blame requests), but
also much slower for small requests. With block-size=8 (set in
fsfs.conf; see the sketch after this list) the advantage for large
requests mostly remains, while small requests get a bit faster (but
still slower than f7 without block-read, and especially than f6_1.8).
That leaves me with a dilemma: which use-case is more important? I
think I won't use block-read for now ... it would be nice if those
regressions for small requests could be improved, though (ideally,
the system would be able to guess whether block-read is worthwhile
for a particular high-level request).

* FSX is rather slow in most tests (specifically: it's fine when disks
are cold, but it doesn't speed up much once disk caches become hot --
CPU usage stays high, so I think FSX is CPU-bound in many of my
tests). Also, not included in the numbers below: export with FSX was
significantly faster with --cache-revprops *disabled* (around 120
seconds vs 165 seconds when disk caches were hot). In other words:
although I ran all FSX tests with cache-revprops enabled, that option
seems to slow down export.

* Unpacked f7 and FSX are rather slow (I didn't fully test them, it was
immediately obvious) -- these back-ends are best used in packed form.
OTOH, f6 gets its best performance in unpacked form (the pack manifest
lookup is overhead).
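
For completeness, the fsfs.conf knobs referred to above, as far as I
understand them (section/option names taken from the fsfs.conf template --
double-check against your own db/fsfs.conf):

    # db/fsfs.conf of the f6_1.9 repository: switching these off should
    # give the 1.8-style behaviour without dir/prop-deltification
    [deltification]
    enable-dir-deltification = false
    enable-props-deltification = false

    # db/fsfs.conf of the f7 repositories: block size in kB used by
    # --block-read (64 is the default IIRC; 8 was used for f7_pkd_br8)
    [io]
    block-size = 8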


== Details & Numbers ==

Backend variations:
* f6_1.8
    repos created & loaded with  1.8 (no dir/prop-deltif)
    on-disk size: 10 GB
    server: svnserve -d --foreground -M128 -c0 -r repos_1.8
* f6_1.9
    repos created with 1.9-beta1 with --compatible-version 1.8
        & loaded with 1.9-beta1 (has dir/prop-deltif enabled by default)
    on-disk size: 6.1 GB
    server: svnserve -d --foreground -M128 -c0 -r repos_1.9_f6
* f7_pkd
    repos created & loaded with 1.9-beta1 + packed
    on-disk size: 5.8 GB
    server: svnserve -d --foreground -M128 -c0 -r repos_1.9_f7_packed
* f7_pkd_br
    same as above, but with block-read enabled
    server: svnserve -d --foreground -M128 -c0 --block-read yes -r repos_1.9_f7_packed
* f7_pkd_br8
    same as f7_pkd_br, but with block-size=8 in fsfs.conf
    server: svnserve -d --foreground -M128 -c0 --block-read yes -r repos_1.9_f7_packed
* fsx_pkd
    repos created with --fs-type fsx & loaded with 1.9-beta1 + packed
    on-disk size: 5.3 GB
    server: svnserve -d --foreground -M128 -c0 --cache-revprops yes -r repos_1.9_fsx_packed

svnserve: 1.9.0-beta1 (provided by Philip - WANdisco) on Solaris SPARC
client: svnbench 1.9.1 x64 (SlikSVN) on Windows 7
Network between client and server: 100 Mbit LAN

Hardware:
* Server: Solaris SPARC T5, 64-bit (16 x sparcv9 @ 3600 MHz), 24 GB RAM
* Back-end: NetApp cluster (I don't know the details), mounted via NFS
(The volume used here is relatively slow -- we have faster arrays for
 other systems, but this one is what we have for SVN right now)

Test Method:
* Per backend variation:
  * Per type of test (export, log subtree, blame-huge, list, log file,
blame-normal):
    * Start svnserve
    * Run test from client 4 times
    * Stop svnserve
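
In terms of commands, that was roughly the following (the URL is a
placeholder; the exact svnbench subcommand and options per test type are
the ones shown in the table headings below):

    # on the server (Solaris), once per back-end variation and test type:
    svnserve -d --foreground -M128 -c0 -r repos_1.9_f7_packed

    # on the client (Windows), 4 times in a row, e.g. for the huge log test:
    svnbench null-log -v svn://server/path/to/subtree

    # ... then svnserve is stopped again before the next test type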

Infrastructure observations:
* 'iostat' on the back-end volume showed high busy percentages for each
  first run; the three subsequent iterations consistently showed 0% busy.
  The first run is still interesting, as it shows the "cold" results.
* 'top' showed low cpu usage for each first run (waiting for I/O), and
  high cpu percentages for the three subsequent iterations.
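
(Concretely this was something like the following on the server while a
test ran -- the interval is chosen arbitrarily:)

    iostat -xn 5     # %b (busy) column per device / NFS mount
    top              # cpu usage of the svnserve process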

Times in seconds. The tables below are best viewed with a fixed-width font.

null-export subtree (2,638 dirs; 21,713 files = 178 MB; 129,864 props = 2 MB)
===================
run \ repos | f6_1.8 | f6_1.9 | f7_pkd | f7_pkd_br | f7_pkd_br8 | fsx_pkd |
-----------------------------------------------------------------------------
1st (~cold) |   1053 |    147 |    374 |       162 |         73 |     227 |
2nd         |     81 |     71 |     67 |        85 |         56 |     165 |
3rd         |     80 |     81 |     65 |        86 |         56 |     167 |
4th         |     79 |     65 |     65 |        85 |         57 |     167 |

null-log -v subtree (62,458 revs; 98,900 msg lines; 299,171 changes)
===================
run \ repos | f6_1.8 | f6_1.9 | f7_pkd | f7_pkd_br | f7_pkd_br8 | fsx_pkd |
-----------------------------------------------------------------------------
1st (~cold) |   1558 |    802 |    611 |       176 |        152 |     342 |
2nd         |    291 |    865 |    252 |       126 |        136 |     211 |
3rd         |    272 |    742 |    229 |       130 |        134 |     213 |
4th         |    266 |    722 |    232 |       133 |        134 |     208 |

null-blame huge-deep-history (13,998 revs; 13,997 deltas; 33,873 MB in deltas)
============================
run \ repos | f6_1.8 | f6_1.9 | f7_pkd | f7_pkd_br | f7_pkd_br8 | fsx_pkd |
-----------------------------------------------------------------------------
1st (~cold) |    531 |    968 |    351 |       254 |        381 |     308 |
2nd         |    342 |    585 |    340 |       230 |        293 |     293 |
3rd         |    332 |    555 |    349 |       232 |        241 |     298 |
4th         |    332 |    582 |    308 |       233 |        239 |     319 |

null-list -R subtree (2,639 dirs; 21,713 files; 0 locks)
====================
run \ repos | f6_1.8 | f6_1.9 | f7_pkd | f7_pkd_br | f7_pkd_br8 | fsx_pkd |
-----------------------------------------------------------------------------
1st (~cold) |    6.8 |   11.5 |    6.7 |      10.7 |        7.0 |    16.2 |
2nd         |    5.3 |    7.0 |    6.3 |      10.1 |        6.9 |    15.0 |
3rd         |    4.9 |    6.2 |    6.3 |       9.9 |        6.8 |    14.1 |
4th         |    5.0 |    6.4 |    6.3 |      10.2 |        6.8 |    15.2 |

null-log -v file (434 revs; 957 msg lines; 13,955 changes)
================
run \ repos | f6_1.8 | f6_1.9 | f7_pkd | f7_pkd_br | f7_pkd_br8 | fsx_pkd |
-----------------------------------------------------------------------------
1st (~cold) |   17.5 |   30.7 |   15.3 |      23.7 |        8.0 |    16.5 |
2nd         |    3.8 |   10.2 |    3.9 |      14.1 |        6.8 |    11.6 |
3rd         |    3.6 |    8.4 |    3.9 |      13.9 |        7.1 |    11.4 |
4th         |    3.5 |    8.6 |    3.9 |      13.7 |        6.8 |    11.3 |

null-blame normal (542 revs; 542 deltas; 22 MB in deltas)
=================
run \ repos | f6_1.8 | f6_1.9 | f7_pkd | f7_pkd_br | f7_pkd_br8 | fsx_pkd |
-----------------------------------------------------------------------------
1st (~cold) |    6.9 |   27.6 |   19.9 |      38.1 |       15.7 |    29.7 |
2nd         |    4.1 |   15.8 |    5.3 |      32.1 |       14.2 |    27.6 |
3rd         |    4.1 |   13.3 |    5.3 |      33.3 |       14.6 |    28.0 |
4th         |    4.1 |   13.3 |    5.5 |      32.6 |       14.5 |    28.0 |



-- 
Johan
