Constantin Gonzalez wrote:
Hi Eric,
This means that we have one pool with 3 vdevs that access up to 3 different slices on the same physical disk.
minor correction: 1 pool, 3 vdevs, 3 slices per disk on 4 disks.
Question: Does ZFS consider the underlying physical disks when load-balancing, or does it only load-balance across vdevs, thereby potentially overloading physical disks with up to 3 parallel requests per physical disk at once?
ZFS only does dynamic striping across the (top-level) vdevs.
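To make the implication concrete, here is a small sketch (not actual ZFS code; the vdev and slice names are made up to match the layout described above) of why striping only at the vdev level can stack requests on a shared physical disk:

```python
# Hypothetical layout: 3 top-level vdevs, each built from one slice
# of the same 4 physical disks (1 pool, 3 slices per disk, 4 disks).
vdevs = {
    "vdev0": ["disk0s0", "disk1s0", "disk2s0", "disk3s0"],
    "vdev1": ["disk0s1", "disk1s1", "disk2s1", "disk3s1"],
    "vdev2": ["disk0s2", "disk1s2", "disk2s2", "disk3s2"],
}

def requests_per_disk(targets):
    """Count concurrent requests per physical disk when one block is
    written to each listed vdev in parallel. Striping sees only vdevs,
    so it does not notice that the slices share disks."""
    load = {}
    for vdev in targets:
        for slice_name in vdevs[vdev]:
            disk = slice_name.rsplit("s", 1)[0]  # "disk0s1" -> "disk0"
            load[disk] = load.get(disk, 0) + 1
    return load

# Writing one block to each of the 3 vdevs hits every disk 3 times:
print(requests_per_disk(["vdev0", "vdev1", "vdev2"]))
# -> {'disk0': 3, 'disk1': 3, 'disk2': 3, 'disk3': 3}
```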
I understand why you setup your pool that way, but ZFS really likes
whole disks instead of slices.
ok, understood. When I run out of storage, I'll try to get 4 cheap SATA
drives of equal size and migrate all over.
Trying to interpret that the devices are really slices and part of other vdevs seems overly complicated for the gain achieved.
So what data does ZFS base its dynamic striping on? Does it count IOPS per vdev, or does it try to sense the load on the vdevs by measuring, say, response times, queue lengths, etc.?
It's currently done by capacity. We're planning on adding the ability to factor in "speed" of the device (so a slower drive would get less work compared to a faster drive).
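A capacity-weighted allocator can be sketched roughly like this (illustrative only, not the ZFS implementation; the vdev names and free-space figures are invented). Each new block is placed on a vdev with probability proportional to its free space:

```python
import random

# Made-up free space per top-level vdev, in GB.
free_space = {"vdev0": 400, "vdev1": 200, "vdev2": 100}

def pick_vdev(rng):
    """Choose a vdev with probability proportional to its free space."""
    total = sum(free_space.values())
    r = rng.uniform(0, total)
    for vdev, free in free_space.items():
        if r < free:
            return vdev
        r -= free
    return vdev  # boundary fallback for floating-point rounding

rng = random.Random(42)  # fixed seed for reproducibility
counts = {v: 0 for v in free_space}
for _ in range(7000):
    counts[pick_vdev(rng)] += 1

# vdev0 (4x the free space of vdev2) receives roughly 4x the blocks.
print(counts)
```

The "speed" factor Eric mentions would amount to multiplying each weight by a per-device throughput estimate, so a slow drive's share shrinks even if it has plenty of free space.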
eric
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss