Hello,
I switched my home server from Debian to Solaris. The main reasons for
this step were stability and ZFS.
But after the migration (why isn't it possible to mount a Linux
filesystem on Solaris???) I ran a few benchmarks,
and now I'm thinking about switching back to Debian. First of all, the
hardware layout of my home server:
Mainboard: Asus A7V8X-X
CPU: AthlonXP 2400+
Memory: 1.5GB
Hard disks: 1x160GB (IDE, c0d1), 2x250GB (IDE, c1d0 + c1d1), 4x250GB
(SATA-1, c2d0, c2d1, c3d0, c3d1)
SATA controller: SIL3114 (downgraded to the IDE firmware)
Solaris nv_54
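For reference, the two ZFS pools benchmarked below were created more or
less like this (reconstructed from memory, so treat the exact commands
as a sketch rather than a transcript):

# mirror of the two 250GB IDE disks, mounted at /data
zpool create data mirror c1d0 c1d1
# single-parity raidz of the four SATA disks, mounted at /srv
zpool create srv raidz c2d0 c2d1 c3d0 c3d1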
==
First of all I tested the disk performance with dd: "dd bs=1M count=50
if=/dev/dsk/cxdxs1 of=/dev/null"
c0d1 => 67.8 MB/s
c1d0 => 63.0 MB/s
c1d1 => 47.4 MB/s
c2d0 => 54.5 MB/s
c2d1 => 57.5 MB/s
c3d0 => 54.5 MB/s
c3d1 => 56.5 MB/s
Everything looks OK; c1d1 seems about 10 MB/s slower than the rest, but
that should be fine since it's an older disk.
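(For anyone who wants to reproduce the numbers: I simply ran the same dd
command once per disk, roughly like this:)

for d in c0d1 c1d0 c1d1 c2d0 c2d1 c3d0 c3d1; do
        echo "== $d =="
        dd bs=1M count=50 if=/dev/dsk/${d}s1 of=/dev/null
done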
==
Then I compiled the newest version of bonnie++ and ran some benchmarks,
first on a ZFS mirror (/data/) created from
the two 250GB IDE disks:
$ ./bonnie++ -d /data/ -s 4G -u root
Using uid:0, gid:0.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 4G 17832  25 17013  33  4630  12 21778  38 26839  11  66.0   2
Now on the ZFS raidz (/srv), single parity, built from the four SATA disks:
$ ./bonnie++ -d /srv/ -s 4G -u root
Using uid:0, gid:0.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 4G 21292  32 24171  53  9672  25 32420  55 53268  29  87.7   3
For reference, here is a bonnie++ benchmark on the single 160GB IDE
disk:
$ ./bonnie++ -d /export/home/ -s 4G -u root
Using uid:0, gid:0.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 4G 34727  58 34574  25 12193  12 37826  59 41262  17 161.3   1
==
The last test I did was an rsync of a 290MB file between the different
ZFS pools and filesystems (the rsync invocation is sketched below the
numbers):
ZFS mirror  => single disk   7,074,316.87 bytes/sec
ZFS mirror  => ZFS raidz     5,953,633.01 bytes/sec
ZFS mirror  => ZFS mirror    3,982,231.35 bytes/sec
ZFS raidz   => ZFS mirror   10,549,419.89 bytes/sec
ZFS raidz   => single disk  16,251,809.03 bytes/sec
ZFS raidz   => ZFS raidz     8,714,738.17 bytes/sec
single disk => ZFS raidz    18,221,725.27 bytes/sec
single disk => ZFS mirror   24,052,677.36 bytes/sec
single disk => single disk  31,648,259.68 bytes/sec
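(The throughput numbers come straight from rsync's summary line; the
invocation was roughly the following, with the file name just a
placeholder:)

rsync -av /data/bigfile.avi /srv/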
==
Conclusion: in my opinion this is really bad performance, except for
the single-disk case ;-)
Is there a switch in ZFS where I could toggle between lousy performance
and really fast?
zil_disable doesn't look like the right knob.
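(For completeness: the only way I know to try it would be an /etc/system
entry like the one below, but since these tests are mostly reads and
local copies I don't expect the intent log to be the bottleneck.)

* /etc/system -- disable the ZFS intent log (not tested here)
set zfs:zil_disable = 1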
What's the best way to monitor the CPU load during the benchmarks? I
don't believe the problem has anything to do with CPU power, but it's
one thing to check.
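(What I had in mind was just leaving something like this running in a
second terminal while bonnie++ runs, unless there is a better approach:)

vmstat 5           # overall CPU usage and run queue
mpstat 5           # per-CPU breakdown
iostat -xnz 5      # per-device utilisation and service times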
regards,
Sascha