Unfortunately I've started restoring data onto my array (2.5TB at the 20-ish 
MB/sec my LTO2 drive maxes out at will take a while ;) ), so I can't do any more 
testing that involves destroying the zpool and/or individual devices...

So all the numbers below are to a 16-disk raidz2 zpool (unless otherwise noted).

> > If I use raidz, no (overall throughput is actually nearly halved!).  If I
> > use "RAID0" (just striped disks, no redundancy) it improves (significantly
> > in some cases).
> 
> Increasing the blocksize will help.  You can do that on bonnie++ like this:
> 
> ./bonnie++ -d /internal/ -s 8g:128k ...
> 
> Make sure you don't have compression on....
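
(For reference, a quick way to double-check that compression is off on a pool - 
"tank" below is just a placeholder pool name:

  zfs get compression tank       # should report "off"
  zfs set compression=off tank   # only needed if it reports "on"
)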

Didn't seem to make any (significant) difference:

Single run:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine   Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nitrogen    8G:128k           78974  10 46895  16           136737  30  89.1   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 12150  94 +++++ +++ 16762  97 11616  87 +++++ +++ 23418  99
nitrogen,8G:128k,,,78974,10,46895,16,,,136737,30,89.1,4,16,12150,94,+++++,+++,16762,97,11616,87,+++++,+++,23418,99

Two simultaneous runs:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine   Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nitrogen    8G:128k           34556   9 13084  20           56964  21  44.0   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  9570  74 +++++ +++ 11259  85  7129  90 19005  95   605  74
nitrogen,8G:128k,,,34556,9,13084,20,,,56964,21,44.0,2,16,9570,74,+++++,+++,11259,85,7129,90,19005,95,605,74

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine   Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nitrogen    8G:128k           33355   8 12314  19           66295  21  47.1   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  3365  48 +++++ +++  6118  74  1921  77 +++++ +++  1601  78
nitrogen,8G:128k,,,33355,8,12314,19,,,66295,21,47.1,2,16,3365,48,+++++,+++,6118,74,1921,77,+++++,+++,1601,78

> > Some observations:
> > * This machine only has 32 bit CPUs.  Could that be limiting performance?
> 
> It will, but it shouldn't be too awful here.  You can lower kernelbase to
> let the kernel have more of the RAM on the machine.  You're more likely
> going to run into problems w/ the front side bus; my experience w/ older
> Xeons is that one CPU could easily saturate the FSB; using the other would
> just make things worse.  You should not be running into that yet, either,
> though.  Offline one of the CPUs w/ psradm -f 1; reenable w/ psradm -n 1.

I tried this and it didn't seem to make a difference.
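
(In case it helps anyone else, the cycle was roughly the following - psrinfo is 
just a handy way to confirm the processor state before re-running the benchmark:

  psradm -f 1    # take processor 1 offline
  psrinfo        # processor 1 should now report "off-line"
  # ... re-run bonnie++ ...
  psradm -n 1    # bring processor 1 back online
)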

According to vmstat, during the write phase of the benchmark, cpu idle% was 
getting down around the 0 - 15% range and cpu sys% was 75 - 90%.  With two CPUs 
active, idle% and sys% were each around 50%.
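
(That's just from eyeballing the cpu columns - us/sy/id - of something like:

  vmstat 5
)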

> > * A single drive will hit ~60MB/s read and write.  Since these are only
> > 7200rpm SATA disks, that's probably all they've got to give.
> 
> On a good day on the right part of the drive... slowest to fastest sectors
> can be 2:1 in performance...
> 
> What can you get with your drives w/ dd to the raw device when not part of
> a pool?
> 
> Eg /bin/ptime dd if=/dev/zero of=/dev/dsk/... bs=128k count=20000

As I said, I can't write to the raw devices any more, but I tried this with 
reads (of=/dev/null) and the dd processes finished between 3:45 (quickest) and 
4:15 (longest).  So if my maths is right that's 2.5GB read per process * 16 
processes = 40GB in 4:15, or about 155MB/sec (eyeballing iostat -x output during 
the run gave about the same value).
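
(For the record, the read test was essentially 16 of these running in parallel; 
the device paths are placeholders for the 16 actual disks:

  for d in c2t0d0 c2t1d0 c2t2d0 c2t3d0; do        # ...and so on for all 16
      /bin/ptime dd if=/dev/rdsk/${d}s0 of=/dev/null bs=128k count=20000 &
  done
  wait
  # each dd reads 20000 * 128KB = ~2.5GB, so 16 of them = ~40GB
  # 40GB in 4:15 (255 seconds) works out to roughly 155MB/sec aggregate
)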

Interesting that this is _substantially_ less than the 400MB/sec a 16-disk 
RAID0 managed to achieve (albeit about on par with my 16-disk raidz2)...

Added to that, watching the disk activity with iostat -x showed the individual 
drive throughputs varying frequently and significantly throughout the test.  
One second a drive would be hitting 50MB/sec, the next it would be down around 
7.5MB/sec, the next 25MB/sec, etc.  All the disks exhibited this behaviour.
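
(Those per-disk figures came from watching the kr/s column of an extended iostat 
running alongside the dds, something like:

  iostat -xn 1

where -n just adds the friendlier cXtYdZ device names.)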


I'm impressed the x4500s can hit that sort of speed - clearly my performance is 
being limited by the hardware and not the software ;).

Are there any tuning knobs I can play with for ZFS and/or the SATA controllers 
and/or the disks?

Cheers,
CS
 
 