On Fri, Joe Little wrote:
> On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
> >On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote:
> >> Thanks for the tip. In the local case, I could send to the
> >> iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
> >> of 50 seconds (17 seconds better than UFS). However, I didn't even bother
> >> finishing the NFS client test, since it was taking a few seconds
> >> between successive 27K files. So it didn't help NFS at all. I'm
> >> wondering if there is something on the NFS end that needs changing,
> >> no?
> >
> >Keep in mind that turning off this flag may corrupt on-disk state in the
> >event of power loss, etc.  What was the delta in the local case?  17
> >seconds better than UFS, but percentage-wise how much faster than the
> >original?
> >
> 
> I believe it was only about 5-10% faster. I don't have the timing
> results offhand, just some DTrace latency reports.
> 
> >NFS has the property that it does an enormous amount of synchronous
> >activity, which can tickle interesting pathologies.  But it's strange
> >that it didn't help NFS that much.
> 
> Should I also mount with async? Would that be honored on the Solaris
> end? The other option mentioned with similar caveats was nocto. I just
> tried with both, and the observed transfer rate was about 1.4K/s. It
> was painful deleting the 3G directory via NFS, with a deletion rate of
> about 100K/s across these 1000 files. Of course, when I deleted
> locally it was instantaneous.
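
(For reference, the client-side mount being described would look
roughly like the line below.  This is purely illustrative: it assumes
a Linux client, the server name and paths are placeholders, and option
names and semantics vary by client OS; see nfs(5) on the client.)

    # illustrative only: nocto relaxes close-to-open consistency,
    # async is the generic asynchronous mount flag
    mount -t nfs -o nocto,async server:/export/pool /mnt/pool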

I wouldn't change any of the options at the client.  The issue is on
the server side, and none of the other combinations you originally
described have this problem, right?  Mount options at the client will
just muddy the waters.

We need to understand what the NFS/ZFS/iSCSI interaction is and why it
is so much worse.  As Eric mentioned, there may be some interesting
pathologies at play here, and we need to understand what they are so
they can be addressed.

My suggestion is additional DTrace data collection, though I don't have
a specific recipe for what to track next.  Given the significant added
latency, I would look for a big increase in the number of I/Os being
generated to the iSCSI backend compared to the locally attached case.
I would also look for some kind of serialization of I/Os occurring with
iSCSI versus the local attach.
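
As a rough starting point (a sketch only: the io provider covers
buf(9S)-based I/O, so it is worth verifying that the iSCSI initiator's
devices actually fire io:::start on your build; if they don't, fbt
probes on the initiator driver would be the fallback), something like
the following, run on the server while the NFS workload is active,
would show per-device I/O counts, bytes, latency, and the maximum
number of outstanding I/Os:

    # Ctrl-C prints the aggregations
    dtrace -q -n '
    io:::start
    {
            start[arg0] = timestamp;
            pending[args[1]->dev_statname]++;
            @ios[args[1]->dev_statname] = count();
            @bytes[args[1]->dev_statname] = sum(args[0]->b_bcount);
            /* a max depth pinned at 1 suggests serialized I/O */
            @depth[args[1]->dev_statname] = max(pending[args[1]->dev_statname]);
    }

    io:::done
    /start[arg0]/
    {
            @lat[args[1]->dev_statname] = quantize(timestamp - start[arg0]);
            pending[args[1]->dev_statname]--;
            start[arg0] = 0;
    }

    END
    {
            printf("I/O count per device:\n");
            printa("  %-16s %@d\n", @ios);
            printf("bytes per device:\n");
            printa("  %-16s %@d\n", @bytes);
            printf("max outstanding I/Os per device:\n");
            printa("  %-16s %@d\n", @depth);
            printf("I/O latency (ns) per device:\n");
            printa("  %s %@d\n", @lat);
    }'

Comparing the iSCSI-backed run against the locally attached run should
make any blow-up in I/O count, or collapse in concurrency, fairly
obvious.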

Spencer