Jay Grogan wrote:
Ran 3 tests using mkfile to create a 6 GB file on a UFS and a ZFS file system (setup sketched below).
Command run: mkfile -v 6gb /ufs/tmpfile
Test 1 UFS mounted LUN (2m2.373s)
Test 2 UFS mounted LUN with directio option (5m31.802s)
Test 3 ZFS LUN (Single LUN in a pool) (3m13.126s)
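For anyone wanting to reproduce this, a minimal sketch of the three configurations, assuming placeholder device names (c1t0d0 for the UFS LUN, c1t1d0 for the ZFS LUN) and a placeholder pool name "tank":
# newfs /dev/rdsk/c1t0d0s0                                        (test 1: plain UFS)
# mount /dev/dsk/c1t0d0s0 /ufs
# mount -F ufs -o remount,forcedirectio /dev/dsk/c1t0d0s0 /ufs    (test 2: UFS with directio)
# zpool create tank c1t1d0                                        (test 3: single-LUN pool)
# time mkfile -v 6gb /ufs/tmpfile                                 (or /tank/tmpfile for test 3)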
Sunfire V120
1 Qlogic 2340
Solaris 10 06/06
Attached to a Hitachi 9990 (USP); LUNs are OPEN-L's at 33.9 GB, with plenty of cache
on the HDS box. Disks are in a RAID-5.
I'm new to ZFS, so am I missing something? The standard UFS write bested ZFS by about a
minute. ZFS iostat showed about 50 MB/sec.
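For reference, the usual way to watch that throughput on the ZFS side is zpool iostat, e.g. (pool name "tank" is a placeholder):
# zpool iostat -v tank 5
which reports read/write bandwidth per pool and per vdev at a 5-second interval.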
Hmm, something doesn't seem right. From my previous experiments back in
the day, ZFS was slightly faster than UFS:
http://blogs.sun.com/erickustarz/entry/fs_perf_102_filesystem_bw
And I re-ran this on 10/31 Nevada non-debug bits:
ZFS:
# /bin/time sh -c 'lockfs -f .; mkfile 6g 6g.txt; lockfs -f .'
real 1:45.8
user 0.0
sys 16.5
#
UFS write cache disabled:
# /bin/time sh -c 'lockfs -f .; mkfile 6g 6g.txt; lockfs -f .'
real 1:57.4
user 0.9
sys 39.3
#
UFS write cache enabled:
# /bin/time sh -c 'lockfs -f .; mkfile 6g 6g.txt; lockfs -f .'
real 1:57.1
user 0.9
sys 39.4
#
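In case it isn't obvious how the two UFS runs above differ: the write cache on a single SCSI disk can be toggled from the expert mode of format(1M), roughly like this (a sketch; the exact menu depends on the disk):
# format -e
(select the disk)
format> cache
cache> write_cache
write_cache> disable      (or: enable)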
The big difference, of course, is our hardware. I'm using a V210 (2-way
SPARC) with a single disk and no NVRAM.
So what is a "LUN" in your setup? And is there NVRAM in the HDS box?
What does your iostat output look like when comparing UFS vs. ZFS? I'm
wondering if we're hitting the problem where we send the wrong flush-write-cache
command down and end up flushing the NVRAM on every txg, when the storage
should be smart enough to ignore the flush.
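One simple way to capture that comparison is to leave something like
# iostat -xnz 5
running during each mkfile and compare the kw/s (write throughput) and asvc_t (service time) columns for the LUN between the UFS and ZFS runs; a flush on every txg tends to show up as bursty writes with inflated service times. (The 5-second interval and the -xnz flags are just one reasonable choice.)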
eric