I just did another test, this time using a Linux NFS client against B38 with UFS on iSCSI disks. It was in the same ballpark (over 8MB/sec average) as going to UFS on local disk or ZFS on local disk (around 20MB/sec). My UFS-formatted iSCSI disk was only a single iSCSI LUN, not a RAIDZ array like the one backing ZFS on iSCSI. Either way, it looks like all of these combinations of writing (local or NFS) to UFS or ZFS are plenty fast. It took 1 minute 7 seconds to write over NFS to UFS, and 2 minutes 15 seconds to write over NFS to UFS on iSCSI. With ZFS over iSCSI, the same 3GB directory sample took 50+ minutes, at low KB/sec rates.
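For reference, the rough shape of the Solaris-side setup looks like the sketch below. The discovery address and the cXtYdZ device names are placeholders rather than the actual configuration, and exact options will vary:

# Point the Solaris iSCSI initiator at the target and discover LUNs
# (192.168.1.10 is a placeholder discovery address)
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi

# ZFS case: RAIDZ pool across several iSCSI LUNs, shared over NFS
zpool create tank raidz c2t1d0 c2t2d0 c2t3d0
zfs create tank/export
zfs set sharenfs=rw tank/export

# UFS case: a single iSCSI LUN, newfs'd, mounted, and shared over NFS
newfs /dev/rdsk/c2t4d0s2
mount /dev/dsk/c2t4d0s2 /export/ufs
share -F nfs -o rw /export/ufs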

Here are the NFS->UFS->iSCSI dtrace latency numbers for comparison:

dtrace: script './nfs.dtrace' matched 3 probes
^C
CPU FUNCTION
  0 | :END

NFS3 op counts
==============
RFS3_FSSTAT                       4
RFS3_SYMLINK                      5
RFS3_MKDIR                       73
RFS3_COMMIT                     886
RFS3_CREATE                     901
RFS3_RENAME                     901
RFS3_ACCESS                    1881
RFS3_SETATTR                   3792
RFS3_LOOKUP                    3882
RFS3_GETATTR                   7579
RFS3_WRITE                    46844


NFS3 op avg response time (usec)
================================
RFS3_GETATTR                      7
RFS3_ACCESS                       8
RFS3_LOOKUP                      14
RFS3_FSSTAT                     135
RFS3_RENAME                     884
RFS3_SETATTR                   2172
RFS3_CREATE                    6410
RFS3_MKDIR                    15766
RFS3_SYMLINK                  22340
RFS3_WRITE                    30577
RFS3_COMMIT                   57034


NFS3 op avg system time (usec)
==============================
RFS3_ACCESS                       6
RFS3_GETATTR                      6
RFS3_LOOKUP                       8
RFS3_FSSTAT                      23
RFS3_SETATTR                     27
RFS3_CREATE                      66
RFS3_RENAME                      75
RFS3_SYMLINK                     90
RFS3_MKDIR                      120
RFS3_WRITE                      188
RFS3_COMMIT                     306
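The ./nfs.dtrace script itself isn't reproduced here, and since it matched only 3 probes it presumably hooks the common NFSv3 dispatch path rather than each handler. A rough equivalent that produces similar per-op count, response-time, and system-time tables, by instrumenting the individual rfs3_* server routines via fbt, would look something like the sketch below; it reports the kernel function names (e.g. rfs3_write) rather than the RFS3_* labels shown above.

#!/usr/sbin/dtrace -s
#pragma D option quiet

/* Count each NFSv3 server op and note when it started. */
fbt::rfs3_*:entry
{
        @counts[probefunc] = count();
        self->ts  = timestamp;    /* wall-clock start, for response time */
        self->vts = vtimestamp;   /* on-CPU start, for system time */
}

/* On return, fold elapsed and on-CPU time into per-op averages (usec). */
fbt::rfs3_*:return
/self->ts/
{
        @resp[probefunc] = avg((timestamp  - self->ts)  / 1000);
        @sys[probefunc]  = avg((vtimestamp - self->vts) / 1000);
        self->ts  = 0;
        self->vts = 0;
}

dtrace:::END
{
        printf("\nNFS3 op counts\n==============\n");
        printa("%-20s %@10d\n", @counts);
        printf("\nNFS3 op avg response time (usec)\n");
        printa("%-20s %@10d\n", @resp);
        printf("\nNFS3 op avg system time (usec)\n");
        printa("%-20s %@10d\n", @sys);
}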


On 5/5/06, Joe Little <[EMAIL PROTECTED]> wrote:
Well, it was already an NFS-discuss list message. Someone else added dtrace-discuss to it. I have already noted this to a degree on zfs-discuss, but it seems to be mainly an NFS-specific issue at this stage.



On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote:
On Fri, Joe Little wrote:
> On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote:
> >On Fri, Joe Little wrote:
> >> Well, I used the dtrace script used here. The NFS implementation
> >> (server) is Solaris 11 B38, and the client is the RHEL Linux
> >> revision, which doesn't have this problem going through other
> >> SAN-based NAS (NetApp, EMC, etc., even iSCSI). I previously set up a
> >> Linux box as an iSCSI initiator with XFS and Linux's less-than-stellar
> >> kNFS server, and did not see this interaction. Thus, if there
> >> are any thread issues, it's likely on Solaris' end, or there is a
> >> particularly bad interaction with Linux clients if and only if the
> >> Solaris backend is iSCSI. The latter doesn't make sense.
> >
> >It is a server response time issue as you have demonstrated with data.
> >The server in the NFS/ZFS/iSCSI path is not responding as quickly
> >as other combinations and for this particular application, the overall
> >throughput is subpar.
> >
> >Focusing on the disparity found to understand why the NFS/ZFS/iSCSI
> >combo is not working well seems like the correct path.
> >
>
> That's where I'm at a loss. Has the NFS/ZFS/iSCSI path been tested by
> Sun at all?

I don't know and this seems like a good point to move the
discussion to zfs-discuss and nfs-discuss to see if there
is additional input.

Spencer


