Hi Tim,

One switch environment, two ports going to the host, four ports going to
the storage. The switch is a Brocade SilkWorm 3850 and the HBA is a
dual-port QLA2342. Solaris rev is S10 Update 3. The array is a StorageTek
FLX210 (Engenio 2884).

The LUNs had already been moved to the other controller, and MPxIO had
shown the paths change as a result, so the panic was a bit bizarre.
Rebooting the now-idle controller shouldn't have done anything, but it
did. It could have been the array.
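
For anyone who wants to double-check path states after a move like this,
something along these lines should show them (assuming MPxIO is enabled
and the mpathadm bits are on your build; the device name below is just a
placeholder for one of your LUNs):

  # list multipathed LUNs and their operational path counts
  mpathadm list lu

  # per-path access state (active/standby) for one LUN
  mpathadm show lu /dev/rdsk/c4t600A0B800011223344556677d0s2

  # older-style view of the same information
  luxadm display /dev/rdsk/c4t600A0B800011223344556677d0s2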

-J

On 12/22/06, Tim Cook <[EMAIL PROTECTED]> wrote:
Always good to hear others' experiences, J.  Maybe I'll try firing up the
Nexsan today and downing a controller to see how that affects it vs.
downing a switch port/pulling a cable.  My first intuition is time-out
values.  A cable pull will register differently than a blatant time-out
depending on where it occurs.  I.e., pulling the cable from the back of
the server will register instantly, vs. the storage timing out three
switches away.  I'm sure you're aware of that, but just an FYI for
others following the thread who are less familiar with SAN technology.
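
If it does come down to time-out values, the first knob I'd look at is
the disk driver's I/O time-out in /etc/system (ssd for FC disks on SPARC,
sd on x86).  The lines below are only a sketch of the syntax, showing the
default 60-second value rather than a recommendation, and a reboot is
needed for them to take effect:

  * example only: 0x3c = 60 seconds (the default)
  set ssd:ssd_io_time=0x3c
  * the equivalent tunable on x86 is under sd
  set sd:sd_io_time=0x3c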

To get a little more background:

What kind of an array is it?

How do you have the controllers set up?  Active/active?  Active/passive?
In other words, do you have array-side failover occurring as well, or is
it in *dummy mode*?

Do you have multiple physical paths?  I.e., each controller port and each
server port hitting different switches?

What HBAs are you using?  What switches?

What version of snv are you running, and which driver?
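
For anyone pulling that kind of information together, these are the usual
places to look on Solaris 10/Nevada (exact output will vary by build):

  cat /etc/release       # Solaris release / snv build
  fcinfo hba-port        # HBA model, firmware, WWNs, link state
  modinfo | grep -i qlc  # confirms the QLogic FC driver is loaded
  stmsboot -L            # whether MPxIO is managing the FC devices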

Yay for slow Fridays before X-mas; I have a bit of time to play in the
lab today.

--Tim

-----Original Message-----
From: Jason J. W. Williams [mailto:[EMAIL PROTECTED]
Sent: Friday, December 22, 2006 10:56 AM
To: Tim Cook
Cc: Shawn Joy; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re: Difference between ZFS and UFS with one
LUN from a SAN

Just for what it's worth: when we rebooted a controller in our array
(we pre-moved all the LUNs to the other controller), ZFS kernel-panicked
despite us using MPxIO. We verified that all the LUNs were on the
correct controller when this occurred. It's not clear why ZFS thought
it lost a LUN, but it did. We have done cable pulls using ZFS/MPxIO
before and that works very well. It may well be array-related in our
case, but I'd hate for anyone to have a false sense of security.

-J

On 12/22/06, Tim Cook <[EMAIL PROTECTED]> wrote:
> This may not be the answer you're looking for, but I don't know if it's
> something you've thought of.  If you're pulling a LUN from an expensive
> array, with multiple HBAs in the system, why not run mpxio?  If you ARE
> running mpxio, there shouldn't be an issue with a path dropping.  I have
> the setup above in my test lab and pull cables all the time and have yet
> to see a zfs kernel panic.  Is this something you've considered?  I
> haven't seen the bug in question, but I definitely have not run into it
> when running mpxio.
>
> --Tim
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Shawn Joy
> Sent: Friday, December 22, 2006 7:35 AM
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN
> from a SAN
>
> OK,
>
> But let's get back to the original question.
>
> Does ZFS provide you with fewer features than UFS does on one LUN from a
> SAN (i.e., is it less stable)?
>
> >ZFS on the contrary checks every block it reads and is able to find the
> >mirror or reconstruct the data in a raidz config.
> >Therefore ZFS uses only valid data and is able to repair the data blocks
> >automatically.
> >This is not possible in a traditional filesystem/volume manager
> >configuration.
>
> The above is fine if I have two LUNs. But my original question was about
> having only one LUN.
>
> What about kernel panics from ZFS if, for instance, access to one
> controller goes away for a few seconds or minutes? Normally UFS would
> just sit there and warn that I have lost access to the controller. Then,
> when the controller returns after a short period, the warnings go away
> and the LUN continues to operate. The admin can then research further
> into why the controller went away. With ZFS, the above will panic the
> system and possibly cause other corruption on other LUNs due to this
> panic? I believe this was discussed in other threads, and I also believe
> there is a bug filed against this. If so, when should we expect this bug
> to be fixed?
>
>
> My understanding of ZFS is that it functions better in an environment
> where we have JBODs attached to the hosts, so that ZFS takes care of
> all of the redundancy. But what about SAN environments where customers
> have spent big money to invest in storage? I know of one instance where
> a customer has a growing need for more storage space. Their environment
> uses many inodes. Due to the UFS inode limitation when creating LUNs
> over one TB, they would have to quadruple the amount of storage used in
> their SAN in order to hold all of the files. A possible solution to this
> inode issue would be ZFS. However, they have experienced kernel panics
> in their environment when a controller dropped offline.
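>
> (As a rough illustration of that inode ceiling: on the UFS side "df -o i"
> reports the fixed inode counts, while a ZFS dataset has no comparable
> number to run out of, since objects are allocated on demand.  The mount
> point and dataset names below are just placeholders.)
>
>   df -o i /export/ufs_lun               # iused/ifree fixed at newfs time
>   zfs list -o name,used,avail tank/fs   # no static inode limit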
>
> Anybody have a solution to this?
>
> Shawn
>
>
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
