Brian Wilson wrote:
> On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:
> > Darren Dunham wrote:
> >> My previous experience with powerpath was that it rode below the Solaris
> >> device layer. So you couldn't cause trespass by using the "wrong"
> >> device. It would just go to powerpath...
There is an open issue/bug with ZFS and EMC PowerPath for Solaris 10 in x86/x64
space. My customer encountered the issue back in April 2007 and is awaiting
the patch. We're expecting an update (hopefully a patch) by the end of July
2007.
As I recall, it did involve CX arrays and "trespass" fu
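
(A quick way to spot a trespassed LUN, assuming the stock PowerPath CLI, is to
compare the default and current SP owner that powermt reports for each pseudo
device:

  # powermt display dev=all | egrep 'Pseudo name|Owner'

If "current" differs from "default" for a device, that LUN has trespassed.)
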
On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:
Darren Dunham wrote:
If it helps at all. We're having a similar problem. Any LUNs
configured with their default owner to be SP B, don't get along with
ZFS. We're running on a T2000, with Emulex cards and the ssd driver.
MPxIO seems to work well for most cases, but the SAN guys are not
comfortable...
On 7/13/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
> ZFS needs to use the top level multipath device or bad things will
> probably happen in a failover or in initial zpool creation. For
> example: You'll try to use the device on two paths and cause a lun
> failover to occur.
>
> Mpxio fixes a l...
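
(To make the point above concrete -- a minimal sketch only, with a made-up pool
name, using the emcpower0a pseudo device quoted later in this thread:

  # zpool create testpool emcpower0a

rather than

  # zpool create testpool c2t500601601020813Ed0s0

The first form hands ZFS the PowerPath pseudo device, so path selection stays
inside PowerPath; the second ties the pool to one physical path, and an import
or failover down the other path is what provokes the LUN trespass.)
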
On 7/15/07, JS <[EMAIL PROTECTED]> wrote:
>
> I run zfs (v2 and v3) on Emulex and Sun Branded emulex on SPARC with
> Powerpath 4.5.0 (and MPxIO in other cases) and Clariion arrays and have never
> seen this problem. In fact I'm trying to get rid of my PowerPath instances
> and standardizing on MPxIO...
> Shows up as lpfc (is that Emulex?)
lpfc (or fibre-channel) is an Emulex-branded Emulex card - Sun-branded
Emulex uses the emlxs driver.
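
(Since a couple of posts mention standardizing on MPxIO instead of PowerPath:
on Solaris 10 the usual way to enable it for fibre-channel HBAs is stmsboot,
which updates vfstab and the device paths and then asks for a reboot. A general
sketch, not specific to any poster's setup:

  # stmsboot -e

After the reboot the multipathed LUNs appear as single scsi_vhci devices
rather than one device node per path.)
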
Peter Tribble wrote:
> # powermt display dev=all
> Pseudo name=emcpower0a
> CLARiiON ID=APM00043600837 []
> Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46]
> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
> Owner: default=SP B, current=SP B
> Doesn't that then create dependence on the cxtxdxsx device name to be
> available?
>
> /dev/dsk/c2t500601601020813Ed0s0 = path1
> /dev/dsk/c2t500601681020813Ed0s0 = path2
> /dev/dsk/emcpower0a = pseudo device pointing to both paths.
>
> So if you've got a zpool on /dev/dsk/c2t500601601020813Ed0s0...
Peter Tribble wrote:
> I've not got that far. During an import, ZFS just pokes around - there
> doesn't seem to be an explicit way to tell it which particular devices
> or SAN paths to use.
You can't tell it which devices to use in a straightforward manner. But
you can tell it which directories...
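
(To flesh out that truncated suggestion, the approach usually meant is to put
links to only the devices you want considered into a private directory and
point zpool import at it with -d; the directory and pool names here are made
up:

  # mkdir /tmp/emcdev
  # ln -s /dev/dsk/emcpower0a /tmp/emcdev/emcpower0a
  # zpool import -d /tmp/emcdev mypool

import only scans the directories it is given, so it finds the pool through
the pseudo device and never walks the individual c#t#d# paths.)
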
On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
> I wonder what kind of card Peter's using and if there is a potential
> linkage there. We've got the Sun branded Emulex cards in our SPARCs. I
> also wonder if Peter were able to allocate an additional LUN to his
> system whether or not he'd...
On 7/13/07, Brian Wilson <[EMAIL PROTECTED]> wrote:
> Hm. How many devices/LUNs can the server see? I don't know how
> import finds the pools on the disk, but it sounds like it's not happy
> somehow. Is there any possibility it's seeing a Clariion mirror copy
> of the disks in the pool as well...
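
(One way to check for that, assuming nothing exotic about the setup, is to run

  # zpool import

with no arguments: it only lists the importable pools and the devices each one
was found on, without importing anything, so a mirror or snapshot copy of the
LUNs being picked up would show up as unexpected devices in that listing.)
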
...my perspective being mostly a SAN noob, it's all hearsay.
--
Sean M. Alderman
On Jul 13, 2007, at 10:57 AM, Brian Wilson wrote:
> Hmm. Odd. I've got PowerPath working fine with ZFS with both
> Symmetrix and Clariion back ends.
> PowerPath Version is 4.5.0, running on leadville qlogic drivers.
> Sparc hardware. (if it matters)
>
> I ran one of our test databases on ZFS...
On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote:
> You wouldn't happen to be running this on a SPARC would you?
That I would.
> I started a thread last week regarding CLARiiON+ZFS+SPARC = core dump
> when creating a zpool. I filed a bug report, though it doesn't appear
> to be in the database...
On 7/13/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Can you post a "powermt display dev=all", a zpool status and format
> command?
Sure.
There are no pools to give status on because I can't import them.
For the others:
# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00043600837 []...
Can you post a "powermt display dev=all", a zpool status and format
command?
[EMAIL PROTECTED] wrote on 07/13/2007 09:38:01 AM:
> How much fun can you have with a simple thing like powerpath?
>
> Here's the story: I have a (remote) system with access to a couple
> of EMC LUNs. Originally...
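
(For anyone gathering the same data, all three can be captured
non-interactively; the output file names are arbitrary:

  # powermt display dev=all > /var/tmp/powermt.out
  # zpool status -v > /var/tmp/zpool-status.out
  # echo | format > /var/tmp/format.out 2>&1

format reads its disk selection from stdin, so with nothing useful on stdin it
just prints the available disk list and exits.)
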