Thanks for your observations.

HOWEVER, I didn't pose the question

"How do I architect the HA and storage and everything for an email system?"

Our site, like many other data centers, has HA standards and politics and all 
this other baggage that may lead a design to a certain point.  Thus our answer 
will be different from yours.  You can poke holes in my designs, I can poke 
holes in yours, and this could go on all day.

Considering that I am adding a new server to a group of existing servers of 
similar design, we are not going to make radical ground-up redesign decisions 
at this time.  I can fiddle around at the margins with things like stripe-size.

I will point out AS I HAVE BEFORE that ZFS is not yet completely 
enterprise-ready in our view.  For example, in one commonly proposed and (IMO) 
amateurish scenario, we would have two big JBOD units and mirror the drives 
between the arrays.  This works fine if a drive fails, or even if an entire 
array goes down.  BUT you are then left with a storage pool that must be 
serviced immediately, because a single additional drive failure will destroy 
the pool.  Or take a simple drive failure: which spare rolls in?  The one from 
the same array, or one from the other?  It seems like a coin toss.  When it's 
a terabyte of email and 10K+ users, that's a big deal for some people, and we 
did our HA design such that multiple failures can occur with no service 
impact.  The performance may not be ideal, and the design may not seem ELEGANT 
to everyone.  Mixing HW controller RAID and ZFS mirroring is admittedly an odd 
hybrid design.  Our answer works for us, and that is all that matters.
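
For what it's worth, the mirrored-JBOD layout I'm describing would look 
roughly like the sketch below (the pool name and the c1*/c2* device names are 
hypothetical; assume c1 targets sit in one JBOD and c2 targets in the other).  
As far as I know there is no built-in way to tell ZFS which enclosure a spare 
should be pulled from:

  # Each mirror pairs one disk from each JBOD; one spare contributed by each
  # enclosure.
  zpool create mailpool \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0 \
      spare  c1t7d0 c2t7d0

  # If c1t1d0 fails, ZFS resilvers onto whichever spare it selects, which may
  # or may not be the one sitting in the surviving enclosure.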

So if someone has an idea of what stripe-size will work best for us, that 
would be helpful.

Thanks!
 
 