Hey, Eric!

Now things get complicated. ;> I was naively hoping to avoid revealing our
exact pool configuration, fearing that it might lead to lots of tangential
discussion, but I can see how it may be useful so that you have the whole
picture. Time for the big reveal, then...

Here's the exact line used for the baseline test...

create volume data raidz1 c3t600144F0494719240000000000000000d0
c3t600144F0494719D40000000000000000d0 c3t600144F049471A5F0000000000000000d0
c3t600144F049471A6C0000000000000000d0 c3t600144F049471A820000000000000000d0
c3t600144F049471A8E0000000000000000d0

...the line for the 32GB SSD ZIL + 4x146GB SAS L2ARC test...

create volume data raidz1 c3t600144F0494719240000000000000000d0
c3t600144F0494719D40000000000000000d0 c3t600144F049471A5F0000000000000000d0
c3t600144F049471A6C0000000000000000d0 c3t600144F049471A820000000000000000d0
c3t600144F049471A8E0000000000000000d0 cache c1t2d0 c1t3d0 c1t5d0 c1t6d0 log
c1t4d0

...and the line for the 32GB SSD ZIL + 4x80GB SSD L2ARC test...

create volume data raidz1 c3t600144F0494719240000000000000000d0
c3t600144F0494719D40000000000000000d0 c3t600144F049471A5F0000000000000000d0
c3t600144F049471A6C0000000000000000d0 c3t600144F049471A820000000000000000d0
c3t600144F049471A8E0000000000000000d0 cache c1t7d0 c1t8d0 c1t9d0 c1t10d0 log
c1t4d0
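
In case raw zpool syntax is easier to read than NMC-speak, my understanding
is that the third line above boils down to roughly the following under the
hood (a sketch for clarity, not a command we actually typed):

# six iSCSI LUNs in a raidz1 data vdev, four 80GB SSDs as cache (L2ARC),
# one 32GB SSD as log (ZIL)
zpool create data raidz1 \
  c3t600144F0494719240000000000000000d0 c3t600144F0494719D40000000000000000d0 \
  c3t600144F049471A5F0000000000000000d0 c3t600144F049471A6C0000000000000000d0 \
  c3t600144F049471A820000000000000000d0 c3t600144F049471A8E0000000000000000d0 \
  cache c1t7d0 c1t8d0 c1t9d0 c1t10d0 \
  log c1t4d0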

Now I'm sure someone is asking, "What are those crazy
c3t600144F0494719240000000000000000d0, etc., pool devices?" They are iSCSI
targets. Our X4240 is the head node for virtualizing and aggregating six
Thumpers' worth of storage. Each X4500 has its own raidz2 pool that is
exported via iSCSI over 10GbE, the X4240 gathers them all into a raidz1, and
the resulting pool is about 140TB.
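
To make the picture concrete, each Thumper's side of that looks conceptually
like the sketch below. Device names, the zvol size, and the address are
illustrative rather than our real ones, and the shareiscsi route shown is
just one way to export a zvol on this vintage of OpenSolaris (legacy
iscsitgt), not necessarily exactly how we wired it:

# on each X4500: build the local raidz2 pool (device list abbreviated)
zpool create thumper1 raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# carve out a zvol to present to the head node (size is illustrative)
zfs create -V 2T thumper1/lun0
# export the zvol as an iSCSI target (pre-COMSTAR shareiscsi path)
zfs set shareiscsi=on thumper1/lun0

# on the X4240 head node: point the initiator at each Thumper
iscsiadm add discovery-address 192.168.10.11:3260
iscsiadm modify discovery --sendtargets enable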

To head off a few questions that might lead us astray: we have compelling
NAS use cases for this, it does work, and it is surprisingly fault-tolerant
(for example, while under heavy load we can reboot an entire iSCSI node
without losing client connections, data, etc.).

Using the X25-E for the L2ARC, but having no separate ZIL, sounds like a
worthwhile test. Is 32GB large enough for a good L2ARC, though?
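If we run that, I assume the line would look something like the following
(taking c1t4d0 to be the X25-E, since that's the 32GB SSD we used as the
log device above; correct me if I've misread your suggestion):

create volume data raidz1 c3t600144F0494719240000000000000000d0
c3t600144F0494719D40000000000000000d0 c3t600144F049471A5F0000000000000000d0
c3t600144F049471A6C0000000000000000d0 c3t600144F049471A820000000000000000d0
c3t600144F049471A8E0000000000000000d0 cache c1t4d0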

Thanks!
-Gray

On Fri, Jan 16, 2009 at 1:16 AM, Eric D. Mudama
<edmud...@bounceswoosh.org> wrote:

> On Thu, Jan 15 at 15:36, Gray Carper wrote:
>
>>  Hey, all!
>>
>>  Using iozone (with the sequential read, sequential write, random read, and
>>  random write categories), on a Sun X4240 system running OpenSolaris b104
>>  (NexentaStor 1.1.2, actually), we recently ran a number of relative
>>  performance tests using a few ZIL and L2ARC configurations (meant to try
>>  and uncover which configuration would be the best choice). I'd like to
>>  share the highlights with you all (without bogging you down with raw data)
>>  to see if anything strikes you.
>>
>>  Our first (baseline) test used a ZFS pool which had a self-contained ZIL
>>  and L2ARC (i.e. not moved to other devices, the default configuration).
>>  Note that this system had both SSDs and SAS drives attached to the
>>  controller, but only the SAS drives were in use.
>>
>
> Can you please provide the exact config, in terms of how the zpool was
> built?
>
>>  In the second test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD and
>>  the L2ARC on four 146GB SAS drives. Random reads were significantly worse
>>  than the baseline, but all other categories were slightly better.
>>
>
> In this case, ZIL on the X25-E makes sense for writes, but the SAS
> drives read slower than SSDs, so they're probably not the best L2ARC
> units unless you're using 7200RPM devices in your main zpool.
>
>>  In the third test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD and
>>  the L2ARC on four 80GB SSDs. Sequential reads were better than the
>>  baseline, but all other categories were worse.
>>
>
> I'm wondering if the single X25-E is not enough faster than the core
> pool, making a separate ZIL not worth it.
>
>>  In the fourth test, we rebuilt the ZFS pool with no separate ZIL, but with
>>  the L2ARC on four 146GB SAS drives. Random reads were significantly worse
>>  than the baseline and all other categories were about the same as the
>>  baseline.
>>
>>  As you can imagine, we were disappointed. None of those configurations
>>  resulted in any significant improvements, and all of the configurations
>>  resulted in at least one category being worse. This was very much not what
>>  we expected.
>>
>
> Have you tried using the X25-E as a L2ARC, keep the ZIL default, and
> use the SAS drives as your core pool?
>
> Or were you using X25-M devices as your core pool before?  How much
> data is in the zpool?
>
>
> --
> Eric D. Mudama
> edmud...@mail.bounceswoosh.org
>
>


-- 
Gray Carper
MSIS Technical Services
University of Michigan Medical School
gcar...@umich.edu  |  skype:  graycarper  |  734.418.8506
http://www.umms.med.umich.edu/msis/