Philip Brown writes:
 > hi folks...
 > I've just been exposed to zfs directly, since I'm trying it out on
 > "a certain 48-drive box with 4 cpus" :-)
 > 
 > I read in the archives the recent "hard drive write cache"
 > thread, in which someone at Sun made the claim that zfs takes advantage of
 > the disk write cache, selectively enabling and disabling it.
 > 
 > However, that does not seem to be at all true on the system I am testing
 > on (or if it does, it isn't doing it in any kind of effective way).
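
For what it's worth, one way to check whether the on-disk write cache is
actually enabled on a given drive is format(1M) in expert mode; roughly (the
cache menu only shows up for disks whose driver exposes it, and the exact
prompts may differ):

        # format -e
        (select one of the pool disks, e.g. c0t4d0)
        format> cache
        cache> write_cache
        write_cache> display

display reports whether the write cache is currently on for that drive.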
 > 
 > 
 > SunOS test-t[xxxxxx](ahem) 5.11 snv_33 i86pc i386 i86pc
 > 
 > 
 > On the following RAIDZ pool:
 > 
 > # zpool status rzpool
 >    pool: rzpool
 >   state: ONLINE
 >   scrub: none requested
 > config:
 > 
 >          NAME         STATE     READ WRITE CKSUM
 >          rzpool       ONLINE       0     0     0
 >            raidz      ONLINE       0     0     0
 >              c0t4d0   ONLINE       0     0     0
 >              c0t5d0   ONLINE       0     0     0
 >              c1t4d0   ONLINE       0     0     0
 >              c1t5d0   ONLINE       0     0     0
 >              c5t4d0   ONLINE       0     0     0
 >              c5t5d0   ONLINE       0     0     0
 >              c9t4d0   ONLINE       0     0     0
 >              c9t5d0   ONLINE       0     0     0
 >              c10t4d0  ONLINE       0     0     0
 >              c10t5d0  ONLINE       0     0     0
 > 
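For reference, a layout like that corresponds to a single 10-wide raidz vdev,
i.e. something along the lines of:

        zpool create rzpool raidz \
            c0t4d0 c0t5d0 c1t4d0 c1t5d0 c5t4d0 \
            c5t5d0 c9t4d0 c9t5d0 c10t4d0 c10t5d0
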
 > 
 > Write performance for large files appears to top out at around 15-20MB/sec, 
 > according to zpool iostat
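
That kind of number would come from watching the pool while the test runs,
e.g. with something like:

        zpool iostat -v rzpool 5      # per-vdev bandwidth, 5-second intervals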
 > 
 > 
 > However, when I manually enable write cache on all the drives involved...
 > performance for the pathological case of
 > 
 > dd if=/dev/zero of=/rzpool/testfile bs=128k
 > 
 > 
 > jumps to 40-60MB/sec (with an initial spike to 80MB/sec; I was very
 > disappointed to see that was not sustained ;-) )
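
Presumably the manual enabling was done per drive through the same format -e
cache menu mentioned above, i.e. roughly:

        format> cache
        cache> write_cache
        write_cache> enable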


Yes, that is a known issue; see "Sequential writing is jumping". It should
not be too hard to fix, though:

        http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6415647

 > 
 > This kind of performance differential also shows up with "real" load;
 > doing a tar| tar copy of large video files over NFS to the filesystem.
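
That is, a pipeline along the lines of the following (paths invented for
illustration, with the source tree sitting on an NFS mount):

        cd /net/mediaserver/video && tar cf - . | ( cd /rzpool/video && tar xf - )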
 > 
 > 
 > As a comparison, a single disk's dd write performance is around 6MB/sec with
 > no cache, and 30MB/sec with the write cache enabled.
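
A non-destructive way to reproduce that single-disk number is a throwaway
single-disk pool on a spare drive; c11t0d0 below is just a made-up device
name:

        zpool create scratch c11t0d0
        dd if=/dev/zero of=/scratch/testfile bs=128k count=8192    # ~1GB write
        zpool destroy scratch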
 > 
 > So the 40-50MB/sec result is kind of disappointing, with a **10** disk pool.
 > 

I don't think RAID-Z is your problem in the above, but if
random-read performance is important, do check this:

        http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to


-r

 > 
 > 
 > Comments?
 > 
 > 
 > 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
