On Wed, April 14, 2010 12:29, Bob Friesenhahn wrote:
> On Wed, 14 Apr 2010, David Dyer-Bennet wrote:
>>>>
>>>> Not necessarily for a home server.  While mine so far is all mirrored
>>>> pairs of 400GB disks, I don't even think about "performance" issues, I
>>>> never come anywhere near the limits of the hardware.
>>>
>>> I don't see how the location of the server has any bearing on required
>>> performance.  If these 2TB drives are the new 4K sector variety, even
>>> you might notice.
>>
>> The location does not, directly, of course; but the amount and type of
>> work being supported does, and most home servers see request streams
>> very
>> different from commercial servers.
>
> If it wasn't clear, the performance concern is primarily for writes,
> since zfs load-shares writes across the available vdevs using an
> algorithm that also considers the write queue/backlog for each vdev.
> If a vdev is slow, then it may be filled more slowly than the other
> vdevs.  This is also why zfs encourages all vdevs to use the same
> organization.

As I said, I don't think about performance issues on mine, so I wasn't
thinking of that particular detail, and it's good to call it out
explicitly.  It looks like if the write performance of the new drives
isn't adequate, the write performance of the entire pool becomes
inadequate along with them.
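
To make that concrete, here's a toy sketch of the idea (Python, with
made-up names, speeds, and a simple "send the next block to the
least-backlogged vdev" rule -- a stand-in for the queue/backlog-aware
allocation Bob describes, not the actual ZFS allocator):

from dataclasses import dataclass

@dataclass
class Vdev:
    name: str
    write_mbps: float        # assumed sustained write speed
    backlog_mb: float = 0.0  # data queued but not yet flushed

def allocate(vdevs, total_mb, block_mb=1.0):
    # Hand out blocks one at a time, always to the vdev whose current
    # backlog would drain soonest at its own speed, so a slow or busy
    # vdev receives new writes more slowly than its peers.
    for _ in range(int(total_mb / block_mb)):
        target = min(vdevs, key=lambda v: v.backlog_mb / v.write_mbps)
        target.backlog_mb += block_mb

def pool_flush_seconds(vdevs):
    # The pool isn't done flushing until the slowest vdev is.
    return max(v.backlog_mb / v.write_mbps for v in vdevs)

pool = [Vdev("mirror-0", 100.0),
        Vdev("mirror-1", 100.0),
        Vdev("mirror-2", 25.0)]  # the hypothetical slower new pair
allocate(pool, total_mb=1000)
for v in pool:
    print(f"{v.name}: {v.backlog_mb:.0f} MB queued")
print(f"~{pool_flush_seconds(pool):.1f} s to flush 1000 MB")

Run that and mirror-2 ends up with roughly a quarter of the data the
other two vdevs get, and the pool still takes about 4.4 s to flush
1000 MB instead of the ~3.3 s three equally fast vdevs would manage --
the "filled more slowly" behavior Bob describes, plus the aggregate
slowdown.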

I expect it's already routine to have disks of different generations in
the same pool (and if it isn't now, it will be within 5 years), simply
because of what's available at the time, replacing failed drives, and so
forth.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
