> On Feb 10, 2008 9:06 AM, Jonathan Loran <[EMAIL PROTECTED]> wrote:
>>
>> Richard Elling wrote:
>>> Nick wrote:
>>>> Using the RAID card's capability for RAID6 sounds attractive?
>>>
>>> Assuming the card works well with Solaris, this sounds like a
>>> reasonable solution.
>>
>> Careful here.  If your workload is unpredictable, RAID 6 (and RAID 5,
>> for that matter) will break down under highly randomized write loads.
>> There's a lot of trickery done with hardware RAID cards that can do
>> some read-ahead caching magic, improving the
>> read-paritycalc-paritycalc-write cycle, but you can't beat the laws
>> of physics.  If you do *know* you'll be streaming more than writing
>> small random numbers of blocks, RAID 6 hardware can work.  But with
>> transaction-like loads, performance will suck.
>>
>> Jon
>
> I would like to echo Jon's sentiments and add the following: if you
> are going to have a mix of workload types, or if your IO pattern is
> unknown, then I would suggest that you configure the array as a JBOD
> and use raidz.  RAID 5 or RAID 6 works best for predictable IOs with
> well-controlled IO unit sizes.
>
> How you lay it out depends on whether you need (or want) hot spares.
> What are your objectives here?  Maximum throughput, lowest latencies,
> maximum space, best redundancy, serviceability/portability, or ...?
>
> Cheers,
>
> _J

The top priority would be to provide some redundancy; the ability to cope with 
up to 2 disk failures out of the 12-disk array is very attractive. Next up, I 
would say performance is important. I will have no control over how many 
virtual (or physical) machines access their storage through the device, 
although I would characterise any single VM as undemanding. Certainly nothing 
transactional, and database access will be light. I am expecting MS-Exchange 
and general CIFS shares to the desktop to be the greediest consumers.
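For the redundancy side, the double-parity layout I have in mind (following 
the JBOD-plus-raidz suggestion above) would look roughly like the sketch 
below; the pool name and device names are only placeholders, not the 
controller's actual targets:

    # A single 12-wide raidz2 vdev: any two of the twelve disks can fail
    # at the same time without losing data.
    zpool create tank raidz2 \
        c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

    # Sanity-check the layout and redundancy afterwards.
    zpool status tank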

I am hoping that having 12 spindles of 15K rpm SAS drives will be a good base 
to build upon, performance-wise.

As this system will be handed over to non-Solaris, mostly Windows-y types to 
use, I will be investing a lot of time in trying to make the system as "set 
and forget" as possible, and am prepared to accept some compromises in order 
to achieve that :-)
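On the "set and forget" front, what I am looking at so far is roughly the 
following (again only a sketch; it assumes a pool called tank and that one 
disk is held back as a hot spare, i.e. an 11-wide raidz2 rather than the 
12-wide layout above):

    # Attach the spare disk; ZFS/FMA will bring it in automatically if a
    # pool disk is faulted.
    zpool add tank spare c1t11d0

    # With autoreplace on, a new disk inserted into a failed disk's slot
    # is used without a manual "zpool replace".
    zpool set autoreplace=on tank

    # Weekly scrub from root's crontab (02:00 every Sunday) to find and
    # repair latent errors without anyone having to log in:
    # 0 2 * * 0 /usr/sbin/zpool scrub tank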


Nick
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
