> Therefore, it sounds like I should be strongly leaning 
> towards not using the hardware raid in external disk arrays 
> and use them like a JBOD.
> 

Another thing to consider is the transparency that Solaris (or any
general-purpose operating system) gives you for troubleshooting.  For
example, there's no way to run DTrace on the ASIC that's doing your
hardware RAID to show you exactly where your bottleneck is (even
per-disk iostat isn't available in most cases).  How would you
determine that your application's read stride size is making one of
the component disks a hot spot?

Also, the more the OS knows about the physical layout of the disks,
the better its I/O scheduler can reorder operations to minimize seeks.
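A rough way to watch how well that reordering works is to quantize the
distance between consecutive block addresses per device.  Again just a
sketch on the io provider; `last' is a global array I made up for
illustration:

    io:::start
    /last[args[1]->dev_statname] != 0/
    {
            /* Distance from the previous I/O on this device;
               negative values are backward seeks. */
            @dist[args[1]->dev_statname] =
                quantize((int64_t)(args[0]->b_blkno -
                    last[args[1]->dev_statname]));
    }

    io:::start
    {
            last[args[1]->dev_statname] = args[0]->b_blkno;
    }

Distributions clustered near zero mean the scheduler is batching
nearby blocks; wide ones mean lots of head movement.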

These were crucial points when we were weighing our next "big disk
array" purchase.  It's for a disk-to-disk backup server (very large
sequential reads and writes issued by a mostly-idle CPU), so we're
mostly limited by raw spindle throughput and (since it's on
cheaper/slower disks) seek time.

--Joe