Hello Torrey,

Wednesday, August 9, 2006, 5:39:54 AM, you wrote:

TM> I read through the entire thread, I think, and have some comments.

TM>     * There are still some "granny smith" to "Macintosh" comparisons
TM>       going on. Different OS revs, it looks like different server types,
TM>       and I can't tell about the HBAs, links or the LUNs being tested.

Hmmmm... that's true for the first test - I did use different OS
revisions - but I then corrected it and ran the same tests on both
OSes. The server hardware is identical on both machines: V440,
4 x 1.5 GHz, 8 GB RAM, dual-ported 2Gb FC card based on QLogic
(1077,2312).

I also included snv_44 and S10 6/06 to see whether there are real
differences in ZFS performance between them in those tests.

I know I haven't included all the details - some are more or less
obvious, some are not.
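For the record, confirming that the two boxes really do match is just
a matter of diffing the usual outputs on each server - a quick sketch,
nothing 3510-specific:

  uname -a                                # OS release / kernel patch level
  /usr/platform/`uname -i`/sbin/prtdiag   # platform type, CPUs, memory
  fcinfo hba-port                         # HBA model, firmware, link speed
  luxadm probe                            # FC LUNs visible to the host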

TM>     * Before you test with filebench or ZFS perform a baseline on the
TM>       LUN(s) itself with a block workload generator. This should tell
TM>       the raw performance of the device of which ZFS should be some
TM>       percentage smaller. Make sure you use lots of threads.

Well, that's why I compared it to UFS. OK, no SVM+UFS testing, but
anyway. I wanted some kind of quick answer to a really simple question
a lot of people (me included) are going to ask themselves: with arrays
like the 3510, is it better to use HW RAID with UFS? Or HW RAID with
ZFS? Or is it actually better to use only 3510 JBODs with ZFS? There
are many factors, and one of them is performance. Since I want to use
it as an NFS server, filebench/varmail is a good enough approximation.
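For reference, the varmail runs themselves are nothing fancy - roughly
this, from filebench's interactive prompt (the mount point and thread
count below are placeholders, and the exact syntax depends on the
filebench build):

  filebench
  filebench> load varmail
  filebench> set $dir=/tank/fs        # ZFS or UFS mount under test
  filebench> set $nthreads=16         # keep identical across all runs
  filebench> run 60                   # 60s run; compare ops/s and latency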

And I've got an answer: right now ZFS should be faster than UFS,
regardless of whether I put them on HW RAID or, in the case of ZFS,
use software RAID.
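If I find time for the raw baseline Torrey suggests, even something as
crude as a few parallel readers against the raw LUN would give a rough
ceiling to compare against. A sketch only - the device name is a
placeholder, it is read-only so non-destructive, and overlapping
readers may be partly served from the array cache:

  # eight concurrent sequential readers against the raw 3510 LUN
  for i in 1 2 3 4 5 6 7 8; do
          dd if=/dev/rdsk/c2t40d0s2 of=/dev/null bs=128k count=50000 &
  done
  wait

A proper block generator with random I/O and a real thread count would
be better, but even this shows what the channel and the array can do
before ZFS enters the picture.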

TM>     * Testing ...
TM>           o I'd start with configuring the 3510RAID for a sequential
TM>             workload, one large R0 raid pool across all the drives
TM>             exported as one LUN, ZFS block size at default and testing
TM>             from there. This should line the ZFS blocksize and cache
TM>             blocksize up more than the random setting.
TM>           o If you want to get interesting try slicing 12 LUNs from the
TM>             single R0 raid pool in the 3510, export those to the host,
TM>             and stripe ZFS across them. (I have a feeling it will be
TM>             faster but that's just a hunch)
TM>           o If you want to get really interesting export each drive as a
TM>             single R0 LUN and stripe ZFS across the 12 LUNs (Which I
TM>             think you can do but don't remember ever testing because,
TM>             well, it would be silly but could show some interesting
TM>             behaviors.)

I know - there are other scenarios that are also interesting. I would
love to test them in more detail with different workloads, etc., if
only I had the time.
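For the record, the ZFS side of those three layouts is trivial - the
real work is in the 3510 configuration. Roughly (the device names are
placeholders for whatever LUNs the array exports):

  # (1) one big HW R0 LUN, ZFS on top, recordsize left at the 128K default
  zpool create tank c2t40d0

  # (2) 12 LUNs sliced from the single R0 pool, dynamically striped by ZFS
  zpool create tank c2t40d0 c2t40d1 c2t40d2 c2t40d3 c2t40d4  c2t40d5 \
                    c2t40d6 c2t40d7 c2t40d8 c2t40d9 c2t40d10 c2t40d11

  # (3) each drive exported as its own R0 LUN - same zpool command as (2),
  #     only the 3510-side configuration differs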

TM>     * Some of the results appear to show limitations in something
TM>       besides the underlying storage but it's hard to tell. Our internal
TM>       tools - Which I'm dying to get out in the public - also capture
TM>       cpu load and some other stats to note bottlenecks that might come
TM>       up during testing.

It looks that way.
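Capturing the basic system stats alongside a run is easy enough with
the standard tools - something like this in parallel with filebench
(a sketch; the 5-second interval is arbitrary):

  vmstat 5          > vmstat.out &    # CPU, run queue, paging
  mpstat 5          > mpstat.out &    # per-CPU utilisation
  iostat -xnz 5     > iostat.out &    # per-LUN service times and %busy
  zpool iostat -v 5 > zpool.out  &    # per-vdev view from ZFS itself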

I would also like to test bigger configs - say 2-3 additional JBODs
and more HW RAID groups - and generate the workload concurrently on
many file systems, then try it on the ZFS side both in one pool and in
separate pools and see how it behaves. I'll see about it.
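The one-pool-versus-separate-pools part is at least easy to set up -
conceptually just this (placeholder device names again):

  # one pool, several file systems sharing all the spindles
  zpool create tank c2t40d0 c2t40d1 c2t40d2 c2t40d3
  zfs create tank/fs1
  zfs create tank/fs2

  # separate pools, each with its own spindles
  zpool create tank1 c2t40d0 c2t40d1
  zpool create tank2 c2t40d2 c2t40d3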

TM> That said this is all great stuff. Keep kicking the tires.

:)

-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com
