I read through the entire thread, I think, and have some comments.

   * There are still some "granny smith" to "Macintosh" comparisons
     going on: different OS revs, what look like different server
     types, and I can't tell whether the HBAs, links, or LUNs being
     tested match up.
   * Before you test with filebench or ZFS, get a baseline on the
     LUN(s) themselves with a block workload generator. That tells you
     the raw performance of the device; ZFS should come in at some
     percentage below that. Make sure you use lots of threads.
   * Testing ...
          o I'd start with configuring the 3510 RAID for a sequential
            workload: one large R0 raid pool across all the drives,
            exported as one LUN, ZFS block size at default, and test
            from there. This should line the ZFS blocksize up with the
            cache blocksize better than the random setting does.
          o If you want to get interesting, try slicing 12 LUNs from
            the single R0 raid pool in the 3510, export those to the
            host, and stripe ZFS across them. (I have a feeling it
            will be faster, but that's just a hunch.)
          o If you want to get really interesting, export each drive
            as a single R0 LUN and stripe ZFS across the 12 LUNs. (I
            think you can do that, though I don't remember ever
            testing it because, well, it would be silly. It could show
            some interesting behaviors, though.)
   * Some of the results appear to show limitations in something
     besides the underlying storage, but it's hard to tell. Our
     internal tools (which I'm dying to get out into the public) also
     capture CPU load and some other stats to flag bottlenecks that
     might come up during testing.
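
A minimal sketch of that baseline step, assuming only a POSIX shell and
dd: several parallel sequential readers hitting the device at once. A
scratch file stands in for the raw LUN here so the sketch is runnable
anywhere; on a real system you'd point DEV at the raw device path (and
skip the stand-in creation step).

```shell
#!/bin/sh
# Hedged sketch: rough sequential-read baseline via N parallel dd readers.
# DEV is a placeholder; on a real system it would be the raw LUN device
# (something like /dev/rdsk/cXtYdZs2).  A 64 MB scratch file stands in here.
DEV=${DEV:-/tmp/lun-standin}
THREADS=${THREADS:-8}                       # "lots of threads"

# Create the stand-in "LUN" (skip this when DEV is a real device).
dd if=/dev/zero of="$DEV" bs=1024k count=64 2>/dev/null

i=0
while [ "$i" -lt "$THREADS" ]; do
    # Each reader streams the whole device sequentially to /dev/null.
    dd if="$DEV" of=/dev/null bs=1024k 2>/dev/null &
    i=$((i + 1))
done
wait                                        # block until all readers finish
echo "$THREADS parallel readers done against $DEV"
```

Timing the whole run gives you a crude aggregate throughput number; a
real baseline would sweep block sizes and thread counts, or use a
dedicated block workload generator instead of dd.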

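For the 12-LUN stripe cases above, the pool creation itself is simple:
listing the devices with no redundancy keyword gives a plain dynamic
stripe. The pool name and cXtYdZ device names below are made up, so
treat this as a sketch of the shape of the command, not a recipe:

```shell
# Sketch only: pool name and device names are placeholders; substitute
# whatever `format` actually shows for the 12 exported LUNs.
zpool create tank \
    c2t0d0 c2t1d0 c2t2d0  c2t3d0 \
    c2t4d0 c2t5d0 c2t6d0  c2t7d0 \
    c2t8d0 c2t9d0 c2t10d0 c2t11d0
# No "mirror"/"raidz" keyword, so ZFS dynamically stripes across all 12.
zpool status tank
```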

That said, this is all great stuff. Keep kicking the tires.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss