On Fri, 29 Aug 2008, Kyle McDonald wrote:
>> 
> What would one look for to decide what vdev to place each LUN?
>
> All mine have the same Current Load Balance value: round robin.

That is a good question and I will have to remind myself of the 
answer.  The "round robin" is good because it means that there are 
two working paths to the device.  Two "Access State:" lines are 
printed: one is the status of the first path ('active' means it is 
used to transmit data), and the other is the status of the second 
path.  The controllers on the 2540 each "own" six of the drives by 
default (they operate active/standby at the drive level), so 
presumably (only an assumption) MPxIO directs traffic to the 
controller which has the best access to each drive.
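
For example, a single LUN's path state can be checked like this (a 
hedged sketch; the long device name is only a hypothetical 
placeholder -- substitute one of your own LUNs):

    mpathadm show lu /dev/rdsk/c4t600A0B800012345600000001d0s2 | \
        egrep 'Load Balance|Access State'

This should print the "Current Load Balance:" line along with the 
two "Access State:" lines discussed above.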

Assuming that you use a pool design which allows balancing, you would 
want to choose six disks which have 'active' in the first line and 
six disks which have 'active' in the second line, and ensure that 
your pool or vdev design takes advantage of this.
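
One way to sort the disks into the two sets is to loop over the LUNs 
and print each one's access states (again only a sketch; it assumes 
MPxIO-style device names and scrapes the disk list from 'format', 
which is just one common idiom, not the only way):

    for d in `format </dev/null 2>/dev/null | awk '/c[0-9]+t/ {print $2}'`
    do
        echo "== $d"
        mpathadm show lu /dev/rdsk/${d}s2 | grep 'Access State'
    done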

For example, my pool uses mirrored devices, so I would split my 
mirrors so that one device is from the first set and the other 
device is from the second set.  If you choose to build your pool 
with two raidz2s, then you could put all the devices active on the 
first Fibre Channel interface into the first raidz2 and the rest 
into the other; this way you get balancing due to the vdev load 
sharing.  Another option with raidz2 is to make sure that each 
six-disk raidz2 has three disks from each set so that writes to the 
vdev produce a load distributed across both interfaces.  The reason 
you might prefer load sharing at the vdev level is that if there is 
a performance problem with one vdev, the other vdev should still 
perform well and take more of the load.  The reason you might want 
to load share within a vdev is that I/Os to the vdev might be more 
efficient.
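
To make the mirrored layout concrete, the pool creation could look 
something like this (a hedged sketch only; the device names are 
hypothetical placeholders, where LUNs ending 01 through 06 are 
active on the first path and 07 through 12 on the second):

    zpool create tank \
        mirror c4t600A0B800012345600000001d0 c4t600A0B800012345600000007d0 \
        mirror c4t600A0B800012345600000002d0 c4t600A0B800012345600000008d0 \
        mirror c4t600A0B800012345600000003d0 c4t600A0B800012345600000009d0 \
        mirror c4t600A0B800012345600000004d0 c4t600A0B800012345600000010d0 \
        mirror c4t600A0B800012345600000005d0 c4t600A0B800012345600000011d0 \
        mirror c4t600A0B800012345600000006d0 c4t600A0B800012345600000012d0

The two raidz2 variants described above follow the same pattern; 
only the grouping of the twelve devices changes.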

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

