I wouldn't expect any improvement from using a separate disk slice for the
intent log unless that disk was much faster and otherwise largely idle. If it
was heavily used, then I'd expect quite a performance degradation as the disk
head bounces around between slices. Separate intent logs are really only
recommended for fast devices (SSDs or NVRAM).
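For what it's worth, with pool version 10 attaching such a device is a
one-liner (a sketch; the pool name "tank" and device name are hypothetical):

```
# zpool add tank log c4t0d0
# zpool status tank
  (the log device shows up under a separate "logs" section)
```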

When you're comparing against UFS, is the write cache disabled (use format -e)?
Otherwise UFS is unsafe.
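A quick way to check and disable it (a sketch; the disk name c1t0d0 is
hypothetical, and the exact menu entries can vary by drive):

```
# format -e c1t0d0
format> cache
cache> write_cache
write_cache> display
Write Cache is enabled
write_cache> disable
write_cache> quit
```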

To get an apples-to-apples perf comparison, you can compare either:

Safe mode
---------
ZFS with default settings (zil_disable=0 and zfs_nocacheflush=0)
against UFS with the write cache disabled. I.e. the safe mode.

Unsafe mode (unless the device's write cache is non-volatile)
-------------------------------------------------------------
ZFS with zil_disable=0 & zfs_nocacheflush=1
against UFS with write cache enabled.
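For reference, the two ZFS configurations above map to /etc/system lines like
these (a sketch; a reboot is needed for the settings to take effect):

```
* Safe mode: these are the defaults, shown here only for clarity
set zfs:zil_disable = 0
set zfs:zfs_nocacheflush = 0

* Unsafe mode: keep the ZIL but skip cache flushes
* (only acceptable if the write cache is non-volatile)
set zfs:zil_disable = 0
set zfs:zfs_nocacheflush = 1
```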

From my reading of one of your comparisons, ZFS takes 10s vs. 15s for UFS
(unsafe mode).

Neil.

On 11/13/08 16:23, Doug wrote:
> I've got an X4500/thumper that is mainly used as an NFS server.
> 
> It has been discussed in the past that NFS performance with ZFS can be slow 
> (when running "tar" to expand an archive with lots of files, for example.)  
> My understanding is that zfs/nfs is slow in this case because it is doing 
> the "correct/safe" thing of waiting for the files to be written to disk.
> 
> I can (and have) improved nfs/zfs performance by about 15x by adding "set 
> zfs:zil_disable=1" or "zfs:zfs_nocacheflush=1" to /etc/system but this is 
> unsafe (though a common workaround?)
> 
> But, I have never understood why zfs/nfs is so much slower than ufs/nfs in 
> the case of expanding a tar archive.  Is ufs/nfs not properly committing the 
> data to disk?
> 
> Anyway, with the just released Solaris 10 10/08, zpool has been upgraded to 
> version 10 which includes option of using a separate storage device for the 
> ZIL.
> It had been my impression that you would need to use a flash disk/SSD to 
> store the ZIL to improve performance, but Richard Elling mentioned in an 
> earlier post that you could use a regular disk slice for this also (see 
> http://www.opensolaris.org/jive/thread.jspa?threadID=80213&tstart=15)
> 
> On an X4500 server, I had a zpool of 8 disks arranged in RAID 10.  I 
> installed a flash archive of s10u6 on the server then ran "zpool upgrade".  
> Next, I used 
> "zpool add log" to add a 50GB slice on the boot disk for the zfs intent log.  
> 
> But, I didn't see any improvement in NFS performance in running "gtar zxf 
> Python-2.5.2.tgz" (Python language source code).  It took 0.6sec to run on the 
> local system (no NFS) and 2min20sec over NFS.  If I disable the ZIL, the 
> command runs in about 10sec on the NFS client.  (It runs in about 15 seconds 
> over NFS to a UFS slice on the NFS server.)  The separate intent log didn't 
> seem to do anything in this case.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss