[EMAIL PROTECTED] said:
> The reality is that
>       ZFS turns on the write cache when it owns the
>       whole disk.
> _Independently_ of that,
>       ZFS flushes the write cache when ZFS needs to ensure
>       that data reaches stable storage.
> 
> The point is that the flushes occur whether or not ZFS turned the caches on
> (caches might be turned on by some other means outside the visibility
> of ZFS).

Thanks for taking the time to clear this up for us (assuming I'm not the
only one who had this misunderstanding :-).
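
If I wanted to confirm on my own box that those flushes really are being
issued, I suppose a DTrace one-liner along these lines would show it.  The
probe name below is a guess on my part (zil_flush_vdevs may be named
differently, or be absent, in a given build), so treat it as a sketch rather
than a recipe:

# dtrace -n '
    /* count ZIL-triggered write-cache flush requests for ten seconds */
    /* (zil_flush_vdevs is my guess at the relevant function; unverified) */
    fbt:zfs:zil_flush_vdevs:entry { @flushes = count(); }
    tick-10s { exit(0); }'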

Yet today I measured something that leaves me puzzled again.  How can we
explain the following results?

# zpool status -v
  pool: bulk_zp1
 state: ONLINE
 scrub: none requested
config:

        NAME                                                 STATE     READ WRITE CKSUM
        bulk_zp1                                             ONLINE       0     0     0
          raidz1                                             ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s0  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s1  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s2  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s3  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s4  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s5  ONLINE       0     0     0
            c6t4849544143484920443630303133323230303230d0s6  ONLINE       0     0     0

errors: No known data errors
# prtvtoc -s /dev/rdsk/c6t4849544143484920443630303133323230303230d0
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00         34 613563821 613563854
       1      4    00  613563855 613563821 1227127675
       2      4    00  1227127676 613563821 1840691496
       3      4    00  1840691497 613563821 2454255317
       4      4    00  2454255318 613563821 3067819138
       5      4    00  3067819139 613563821 3681382959
       6      4    00  3681382960 613563821 4294946780
       8     11    00  4294946783     16384 4294963166
# 
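
(Quick arithmetic on that label, assuming the usual 512-byte sectors: each
slice is 613563821 sectors, or roughly 292 GiB, and seven of them come to
about 2 TiB, so the LUN really is carved into seven equal chunks.)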

And, at a later time:
# zpool status -v bulk_sp1s
  pool: bulk_sp1s
 state: ONLINE
 scrub: none requested
config:

        NAME                                               STATE     READ WRITE CKSUM
        bulk_sp1s                                          ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s0  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s1  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s2  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s3  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s4  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s5  ONLINE       0     0     0
          c6t4849544143484920443630303133323230303230d0s6  ONLINE       0     0     0

errors: No known data errors
# 


The storage is that same "single 2TB LUN" I used yesterday, except that I've
used "format" to slice it up into 7 equal chunks and made a raidz pool (and
later a simple striped pool) across all of them.  My "tar over NFS" benchmark
on these goes pretty fast.  If ZFS is making the flush-cache call, it sure
works faster than in the whole-LUN case (the rough pool-creation commands are
sketched after the timings below):

ZFS on whole-disk FC-SATA LUN via NFS, yesterday:
    real 968.13
    user 0.33
    sys 0.04
      7.9 KB/sec overall

ZFS on whole-disk FC-SATA LUN via NFS, ssd_max_throttle=32 today:
    real 664.78
    user 0.33
    sys 0.04
      11.4 KB/sec overall

ZFS raidz on 7 slices of FC-SATA LUN via NFS today:
    real 12.32
    user 0.32
    sys 0.03
      620.2 KB/sec overall

ZFS striped on 7 slices of FC-SATA LUN via NFS today:
    real 6.51
    user 0.32
    sys 0.03
      1178.3 KB/sec overall
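
(Sanity check on those figures: rate times elapsed time works out to roughly
the same 7.6 MB of payload in every run, e.g. 7.9 KB/s x 968.13 s and
1178.3 KB/s x 6.51 s both land near 7.6-7.7 MB, so all four runs moved the
same data and the times are directly comparable.)

In case it helps anyone reproduce this, the slice pools were created along
these lines (from memory, not a verbatim transcript of my session, and with
the raidz pool destroyed before the striped one went onto the same slices):

# zpool create bulk_zp1 raidz \
    c6t4849544143484920443630303133323230303230d0s0 \
    c6t4849544143484920443630303133323230303230d0s1 \
    c6t4849544143484920443630303133323230303230d0s2 \
    c6t4849544143484920443630303133323230303230d0s3 \
    c6t4849544143484920443630303133323230303230d0s4 \
    c6t4849544143484920443630303133323230303230d0s5 \
    c6t4849544143484920443630303133323230303230d0s6
# zpool destroy bulk_zp1
# zpool create bulk_sp1s \
    c6t4849544143484920443630303133323230303230d0s0 \
    c6t4849544143484920443630303133323230303230d0s1 \
    c6t4849544143484920443630303133323230303230d0s2 \
    c6t4849544143484920443630303133323230303230d0s3 \
    c6t4849544143484920443630303133323230303230d0s4 \
    c6t4849544143484920443630303133323230303230d0s5 \
    c6t4849544143484920443630303133323230303230d0s6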

Not that I'm complaining, mind you.  I appear to have stumbled across
a way to get NFS over ZFS to work at a reasonable speed, without making
changes to the array (and without resorting to giving ZFS SVM soft partitions
instead of "real" devices).  It's suboptimal, but it's workable
if our Hitachi folks don't turn up a way to tweak the array.

Guess I should go read the ZFS source code (though my 10U3 surely lags
the OpenSolaris stuff).

Thanks and regards,

Marion

