On Sun, Jun 22, 2008 at 2:06 PM, Cesare <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm facing a problem when I configure and create a zpool on my test
> bed. The hardware is a T5120 running Solaris 10 with the latest patches
> and a Clariion CX3 attached through 2 HBAs. In this configuration every
> LUN exported by the Clariion is seen 4 times by the operating system.
>
> If I configure the disk using one particular controller, "zpool create"
> fails, telling me that a device is currently unavailable. If I use a
> different controller (but the same LUN on the Clariion) I don't hit the
> problem and the raidz pool is created. I want to use that controller to
> balance I/O between the HBAs and the storage processors.
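Just to check I've understood the setup: with two HBAs and two storage
processors each LUN shows up under four different cXtYdZ names, and
"zpool create" only works through some of them. A rough sketch of what I
think you're doing - the controller and target names here are made up,
not from your box:

    # same LUN visible down four paths, e.g. (hypothetical names):
    #   c2t5006016A12345678d0   c2t500601612345678Ad0
    #   c4t5006016A12345678d0   c4t500601612345678Ad0

    # through one controller this fails, complaining that a device
    # is currently unavailable (as you report):
    zpool create mypool raidz c2t5006016A12345678d0 c2t5006016A12345678d1 c2t5006016A12345678d2

    # through the other controller the same LUNs work:
    zpool create mypool raidz c4t5006016A12345678d0 c4t5006016A12345678d1 c4t5006016A12345678d2
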
My experience is that zfs + powerpath + clariion doesn't work. (Try a
'zpool export' followed by a 'zpool import' - do you get your pool
back?) I've had to get rid of powerpath and use mpxio instead.

The problem seems to be that the clariion arrays are active/passive,
and zfs trips up if it tries to use one of the passive paths. Using
mpxio hides this and works fine. Powerpath on the (active/active)
DMX-4 seems to be OK, though.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
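PS: roughly what I'd try, as a sketch rather than a recipe - the pool
name is made up, and stmsboot changes your device paths and wants a
reboot, so check the docs for your setup first:

    # sanity check: does the pool survive an export/import cycle?
    zpool export mypool
    zpool import mypool

    # remove powerpath per EMC's procedure, then enable mpxio for
    # the FC HBAs (renames devices, prompts for a reboot):
    stmsboot -e

    # after the reboot each clariion LUN should show up once, with
    # mpathadm showing the path states behind it:
    mpathadm list lu
    zpool status mypool

If the pool doesn't show up after the device names change, 'zpool
import' should find it again under the new paths, since zfs identifies
its devices by label rather than by path.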