On Fri, Jul 25, 2008 at 1:02 PM, Matt Wreede <[EMAIL PROTECTED]> wrote:
> Howdy.
>
> My plan:
>
> I'm planning an ESX-iSCSI target/NFS serving box.
>
> I'm planning on using an Areca RAID card, as I've heard mixed things about 
> hot-swapping with Solaris/ZFS, and I'd like the stability of a hardware RAID.
>
> My question is this: I'll be using 8 750GB SATA drives, and I'm trying to 
> figure out the best method to maintain:
> 1) Performance
> 2) Hot-swap-ability
> 3) Protection against disk loss.
>
> My current plan is to build two RAID-5 arrays, 4 drives each, and mirror them 
> in ZFS and add them to the pool. This will give me 750GB*3, size wise, total.
>
> Now, here is the important question: Does mirroring provide a performance 
> boost, or is it simply a way to provide redundancy? That is, if I go ahead 
> and force-add the RAID-5 arrays, without mirroring them, I'll have 6 usable 
> drives; double the storage, but ZFS won't see any redundancy. But if a drive 
> fails, ZFS won't know or care, I'll simply go into the Areca control panel 
> and eject the drive; voila!
>
> But, is there a performance boost with mirroring the drives? That is what I'm 
> unsure of.
>
> Thanks for any information!
>

I know that if your mind is made up, in terms of using the Areca, then
this post is probably not going to change it, but I'd still like to
give you some food for thought.

If it were me, I would not add the Areca (which, BTW, is a fine piece
of hardware) because:

a) cost; or, put another way, those dollars can be applied elsewhere
with a bigger payback in terms of performance etc.  (more below)

b) You're mixing "software" (in the case of the Areca it's more
correctly called firmware) from two vendors to provide a storage
solution, where the two vendors have fundamentally different
approaches to solving the same (storage) problem.

c) Now you've got to maintain and "patch" both vendors' "software" stacks.

d) Fundamentally, ZFS is designed to talk *directly* to disk drives.

e) the current issues/deficiencies you point out with today's ZFS
implementation *will* vanish over time, as ZFS is still under very
active development.  So you're "solving" a problem that will solve
itself in a relatively short timeframe.

f) the disk drives are tied to the hardware RAID controller - you
can't migrate the disks to another box without buying another
(compatible) RAID controller.  If your RAID controller dies, you're
SOL.

g) your performance will be limited to the performance (today) of the
RAID hardware - rather than the massive performance advantage you'd
gain by upgrading the system to a new motherboard/processor in a
year's time (Nehalem, for example).

I'll assume that you're going to spend $500 on the hardware RAID
controller (because I don't know which model/config you're thinking
of).  So, the question that I propose here (and attempt to answer) is:
"can that $500 be spent on a ZFS-only solution to provide better
value?"

Proposal 1) Buy an LSI-based SAS controller board and a couple of
15k RPM SAS drives (you get to pick the size) and configure them as
ZFS log (slog) and cache devices.  Benefit: improved NFS performance
and improved overall system performance.
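For reference, adding separate log and cache devices is a one-liner
each.  The pool name and cXtYdZ device names below are placeholders -
substitute whatever your pool is called and whatever the SAS drives
enumerate as on your box:

```shell
# Add one 15k SAS drive as a dedicated ZFS intent log (slog) --
# synchronous NFS writes land there instead of on the main pool.
zpool add tank log c2t0d0

# Add the second SAS drive as an L2ARC cache device, to absorb
# random reads that spill out of RAM.
zpool add tank cache c2t1d0

# Verify the resulting layout.
zpool status tank
```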

Proposal 2) Buy as much RAM as possible.  ZFS loves RAM.  How about
16GB or more?  Yep - that'll work!  :)
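And if you want to see how much of that RAM ZFS is actually using,
the ARC counters are exposed through kstat (a quick sketch; "arcstats"
is the standard kstat name on current builds):

```shell
# Current ARC size, in bytes.
kstat -p zfs:0:arcstats:size

# Target and maximum ARC sizes, for comparison.
kstat -p zfs:0:arcstats:c zfs:0:arcstats:c_max
```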

Proposal 3) Put the $500 in the stock market and wait for Sun to
release their "enterprise" RAM/Flash (or whatever it'll be) SSD.
This will provide a *huge* performance gain, especially for NFS.  And
this will be a simple "push in" type upgrade.[0]

Proposal 4) SAS solution similar to proposal 1 - but use the 15k SAS
disks to provide a ZFS mirrored pool with lots of IOPS.   Remember,
there is *no* RAID storage configuration that is "right" for every
workload, and my advice is always to configure multiple RAID configs
to support different workloads[1].  Also, your workload scenarios may
change over time, in ways that you didn't foresee.
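A mirrored SAS pool like that is just (again, pool and device names
are placeholders):

```shell
# Two-way mirror of 15k SAS drives: ZFS spreads reads across both
# sides, so random-read IOPS roughly double vs. a single disk.
zpool create fastpool mirror c3t0d0 c3t1d0

# More mirror vdevs can be appended later to scale IOPS further.
zpool add fastpool mirror c3t2d0 c3t3d0
```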

Proposal 5) Since you'll be providing iSCSI, please do yourself a big
favor and install an enterprise-level (multiple ports?) ethernet card
(Sun has one).  Otherwise the tens of thousands of interrupts/sec
caused by iSCSI ops will *kill* your overall system performance.  The
reason why an enterprise card helps is because it'll coalesce those
interrupts and leave the system CPU cores free to do useful work.
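You can watch that interrupt load for yourself with intrstat(1M) or
mpstat(1M) while an iSCSI workload is running (sketch - the NIC
driver name will vary with your card):

```shell
# Per-device interrupt counts, sampled every 5 seconds -- look for
# the NIC driver (e1000g, nge, etc.) pinning one CPU.
intrstat 5

# The intr/ithr columns show per-CPU interrupt activity.
mpstat 5
```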

[0] and you'll probably need to be a really good investor to be able
to afford it! :)

[1] on a 10-disk system here, there's a 5-disk raidz1 pool, a 2-disk
mirror and a 3-disk mirror.  If I were to do it again, I'd push for a
6-disk raidz2 pool in place of the raidz1 pool.
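For what it's worth, the 6-disk raidz2 layout I'd now prefer would be
created like this (disk names hypothetical):

```shell
# 6-disk raidz2: any two disks can fail without data loss, at the
# cost of two disks' worth of capacity.
zpool create bigpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
```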

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
