...

> For modern disks, media bandwidths are now getting to be > 100 MBytes/s.
> If you need 500 MBytes/s of sequential read, you'll never get it from
> one disk.

And no one here even came remotely close to suggesting that you should try to.

> You can get it from multiple disks, so the questions are:
> 1. How to avoid other bottlenecks, such as a shared fibre channel
> path?  Diversity.
> 2. How to predict the data layout such that you can guarantee a wide
> spread?

You've missed at least one more significant question:

3.  How to lay out the data such that this 500 MB/s drain doesn't cripple 
*other* concurrent activity going on in the system?  That's what increasing 
the amount laid down on each drive to around 1 MB accomplishes.  Otherwise, 
you can easily wind up using all the system's disk resources to satisfy that 
one application, or even fall short if you have fewer than 50 disks 
available: if you spread the data out relatively randomly in 128 KB chunks 
on a system whose disks are reasonably well-filled with data, you'll only 
obtain around 10 MB/s from each disk, whereas with 1 MB chunks similarly 
spread about, each disk can contribute more like 35 MB/s, and you'll need 
only 14 - 15 disks to meet your requirement.

Use smaller ZFS block sizes and/or RAID-Z and things get rapidly worse.
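The disk-count arithmetic above can be sketched out directly.  This is just a back-of-the-envelope estimate using the per-disk throughput figures quoted in the post (roughly 10 MB/s per disk with randomly scattered 128 KB chunks, roughly 35 MB/s with 1 MB chunks); the function name and the exact rates are illustrative assumptions, not measured numbers:

```python
import math

def disks_needed(target_mb_s, per_disk_mb_s):
    """Minimum number of disks to sustain a sequential-read target,
    given an assumed per-disk streaming rate for a chunk size."""
    return math.ceil(target_mb_s / per_disk_mb_s)

# 500 MB/s target with ~10 MB/s per disk (128 KB chunks, randomly spread)
print(disks_needed(500, 10))   # 50 disks
# 500 MB/s target with ~35 MB/s per disk (1 MB chunks, similarly spread)
print(disks_needed(500, 35))   # 15 disks
```

The point is simply that the larger per-disk allocation unit cuts the number of spindles the one streaming application monopolizes by better than a factor of three.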

- bill
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss