Vincent Fox wrote:
> Followup, my initiator did eventually panic.
>
> I will have to do some setup to get a ZVOL from another system to mirror
> with, and see what happens when one of them goes away. Will post in a day or
> two on that.
>
>
On Sol 10 U4, I could have told you that. A few
kristof wrote:
> If you have a mirrored iscsi zpool. It will NOT panic when 1 of the
> submirrors is unavailable.
>
> zpool status will hang for some time, but after, I think, 300 seconds it will
> mark the device unavailable.
>
> The panic was the default in the past, and it only occurs if all
To my mind it's a big limitation of ZFS that it relies on the driver timeouts.
The driver has no knowledge of what kind of configuration the disks are in, and
generally any kind of data loss is bad, so it's not surprising that long
timeouts are the norm as the driver does its very best
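For what it's worth, the knob in question on Solaris is the sd(7D) per-command
timeout, which can be set in /etc/system. A sketch only; the value below is
illustrative, not a recommendation, and shortening it affects every sd device
on the box, not just the iSCSI-backed ones:

```
# /etc/system fragment (sketch) -- sd_io_time is the sd(7D) per-command
# timeout in seconds (default 60); note the driver typically retries a
# failed command several times before giving up, so the effective wait
# before ZFS sees an error is a multiple of this value.
set sd:sd_io_time=20
```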
Followup, my initiator did eventually panic.
I will have to do some setup to get a ZVOL from another system to mirror with,
and see what happens when one of them goes away. Will post in a day or two on
that.
This message posted from opensolaris.org
| You DO mean IPMP then. That's what I was trying to sort out, to make
| sure that you were talking about the IP part of things, the iSCSI
| layer.
My apologies for my lack of clarity. We are not looking at IPMP
multipathing; we are using MPxIO multipathing (mpathadm et al), which
operates at w
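For anyone following along, a rough sketch of the MPxIO side of this (commands
as I remember the Solaris toolchain; device names and the LU path are
placeholders, not from a real box):

```
# Sketch: enabling and inspecting MPxIO (scsi_vhci) multipathing.
stmsboot -e                  # enable MPxIO on supported HBAs (needs a reboot)
mpathadm list lu             # enumerate multipathed logical units
mpathadm show lu /dev/rdsk/c4t60...d0s2   # per-path state for one LU
```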
Vincent Fox wrote:
> You DO mean IPMP then. That's what I was trying to sort out, to make sure
> that you were talking about the IP part of things, the iSCSI layer. And not
> the paths from the "target" system to its local storage.
>
There is more than one way to skin this cat. Fortunatel
I don't think ANY situation in which you are mirrored and one half of the
mirror pair becomes unavailable will panic the system. At least this has been
the case when I've tested with local storage; I haven't tried with iSCSI yet,
but will give it a whirl.
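The test being described would look something like this; pool name and device
names are invented for illustration, and the observed states are what the
thread reports rather than anything I've verified here:

```
# Sketch: mirror two iSCSI-backed devices, then pull one target.
zpool create tank mirror c2t1d0 c3t1d0   # two hypothetical iSCSI LUNs
zpool status tank                        # both sides ONLINE

# ...take one iSCSI target offline on its host, wait out the driver
# timeout, then check again:
zpool status tank    # the dead side should show UNAVAIL and the pool
                     # DEGRADED, but reads/writes continue on the mirror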
I had a simple single ZVOL shared over iSC
On Fri, Apr 4, 2008 at 10:53 PM, Marc Bevand <[EMAIL PROTECTED]> wrote:
> > with him, and I noticed that there are BIOS settings for the pcie max
> > payload size. The default value is 4096 bytes.
>
> I noticed. But it looks like this setting has no effect on anything
> whatsoever.
My guess
Oh sure, pick nits. Yeah I should have said "network multipath" instead of
"ethernet multipath" but really how often do I encounter non-ethernet networks?
I can't recall the last time I saw a token ring or anything else.
You DO mean IPMP then. That's what I was trying to sort out, to make sure that
you were talking about the IP part of things, the iSCSI layer. And not the
paths from the "target" system to its local storage.
You say "non-ethernet" for your network transport, what ARE you using?
This messag
If you have a mirrored iscsi zpool. It will NOT panic when 1 of the submirrors
is unavailable.
zpool status will hang for some time, but after, I think, 300 seconds it will
mark the device unavailable.
The panic was the default in the past, and it only occurs if all devices are
unavailable.
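On builds recent enough to have it, the behavior described above is governed by
the pool-wide failmode property. A sketch, assuming a pool named tank; the
property isn't present on older Solaris 10 updates:

```
# Sketch: failmode controls what ZFS does when a pool loses all paths
# to its devices (a degraded mirror alone does not trigger it).
zpool set failmode=continue tank   # return EIO to callers instead of panicking
zpool get failmode tank
# accepted values: wait (default, block I/O), continue (EIO), panic
```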
Vincent Fox wrote:
> I assume you mean IPMP here, which refers to ethernet multipath.
>
No. IPMP is IP multipathing. You can run IP over almost anything,
even cups-n-string :-)
-- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Sat, Apr 5, 2008 at 5:25 AM, Jonathan Loran <[EMAIL PROTECTED]> wrote:
> This is scaring the heck out of me. I have a project to create a zpool
> mirror out of two iSCSI targets, and if the failure of one of them will
> panic my system, that will be totally unacceptable.
I haven't tried this
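For the setup being described, getting the initiator to see both targets before
mirroring them would go roughly like this; IQNs and addresses are placeholders
I've made up, not from the poster's configuration:

```
# Sketch: static iSCSI discovery of two targets on a Solaris initiator.
iscsiadm add static-config iqn.1986-03.com.sun:02:tgt-a,192.168.1.10
iscsiadm add static-config iqn.1986-03.com.sun:02:tgt-b,192.168.1.11
iscsiadm modify discovery --static enable
devfsadm -i iscsi            # create device nodes for the new LUNs
# then: zpool create tank mirror <lun-a> <lun-b>
```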