On Thu, 2007-08-30 at 12:07 -0700, eric kustarz wrote:
> Hey jwb,
> 
> Thanks for taking up the task; it's benchmarking, so I've got some  
> questions...
> 
> What does it mean to have an external vs. internal journal for ZFS?

This is my first use of ZFS, so be gentle.  External == ZIL on a
separate device, e.g.

zpool create tank c2t0d0 log c2t1d0

> Can you show the output of 'zpool status' when using software RAID  
> vs. hardware RAID for ZFS?

I blew away the hardware RAID configuration, but here's the output for
the software RAID:

# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0
            c2t5d0  ONLINE       0     0     0
        logs        ONLINE       0     0     0
          c2t6d0    ONLINE       0     0     0

errors: No known data errors

iostat shows balanced reads and writes across t[0-5], so I assume this
is working.

> The hardware RAID has a cache on the controller.  ZFS will flush the  
> cache when pushing out a txg (essentially before and after writing  
> out the uberblock).  When you have a non-volatile cache with battery  
> backing (such as your setup), it's safe to disable the flush by  
> putting 'set zfs:zfs_nocacheflush = 1' in /etc/system and rebooting. 

Do you think this would matter?  There's no reason to believe that the
RAID controller respects flush commands, is there?  As far as the
operating system is concerned, a flush means the data has reached
non-volatile storage; the RAID controller's cache/disk configuration is
opaque to it.
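For the record, the suggested tunable amounts to this /etc/system
fragment (Solaris-specific, and only safe if every device behind the
pool really has a battery-backed write cache):

```
# /etc/system -- disable ZFS's SCSI cache-flush requests.
# Only safe when the controller's write cache is non-volatile
# (battery-backed); takes effect after a reboot.
set zfs:zfs_nocacheflush = 1
```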

> What parameters did you give bonnie++?  compiled 64bit, right?

Uh, whoops.  As I freely admit, this is my first encounter with
OpenSolaris, and I just built the software on the assumption that it
would be 64-bit by default.  But it looks like all my benchmarks were
built 32-bit.  Yow.  I'd better redo them with -m64, eh?
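For anyone repeating this, the rebuild and sanity check look roughly
like the following (the configure flags and binary name are guesses for
a typical bonnie++ source tree; both Sun Studio cc and gcc accept -m64):

```shell
# Rebuild as a 64-bit binary (flags are an assumption):
#   ./configure CFLAGS="-m64" CXXFLAGS="-m64" && make
# Then verify what you actually built:
#   file ./bonnie++        # should report "ELF 64-bit"
# POSIX getconf shows the userland's default word size as a baseline:
getconf LONG_BIT
```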

[time passes]

Well, results are _substantially_ worse with bonnie++ recompiled at
64-bit.  Way, way worse.  54MB/s linear reads, 23MB/s linear writes,
33MB/s mixed.

> For the randomio test, it looks like you used an io_size of 4KB.  Are  
> those aligned?  random?  How big is the '/dev/sdb' file?

Randomio does aligned reads and writes.  I'm not sure what you mean
by /dev/sdb?  The file upon which randomio operates is 4GiB.
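For what it's worth, "aligned" here means every transfer starts on a
4 KiB boundary.  A rough sketch of one such read using dd (the scratch
file name and its 1 MiB size are made up for illustration; randomio
itself works against the 4 GiB file):

```shell
# Create a small scratch file, then read one 4 KiB block at a
# random 4 KiB-aligned offset -- the access pattern randomio uses.
dd if=/dev/zero of=rio-scratch.dat bs=4096 count=256 2>/dev/null  # 1 MiB
blocks=256
off=$(( $(od -An -N4 -tu4 /dev/urandom) % blocks ))
dd if=rio-scratch.dat of=/dev/null bs=4096 skip="$off" count=1 \
    2>/dev/null && echo aligned-read-ok
rm -f rio-scratch.dat
```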

> Do you have the parameters given to FFSB?

The parameters are linked on my page.

Regards,
jwb

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss