Hi,
Can anyone identify whether this is a known issue (perhaps bug 6667208) and
whether the fix is going to be pushed out to Solaris 10 anytime soon? I'm
getting badly beaten up over this weekly, essentially any time we drop a
packet between our twenty-odd iSCSI-backed zones and the filer.
Tim Cook wrote:
> Also, I never said anything about setting it to panic. I'm not sure why
> you can't set it to continue while alerting you that a vdev has failed?
Ah, right, thanks for the reminder Tim!
Now I'd asked about this some months ago but didn't get an answer, so
forgive me for asking
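For anyone following along, the "continue while alerting you" behaviour Tim describes maps onto the pool's failmode property plus the usual FMA tooling. A minimal sketch, assuming a placeholder pool name of apppool01:

    # return EIO to applications instead of blocking when the pool's devices go away
    zpool set failmode=continue apppool01

    # FMA still raises a fault for the failed vdev, which you can watch for:
    fmadm faulty
    fmdump -eV | tail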
Hi all,
Not sure if you missed my last response or what, but yes, the pool is
set to wait because it's one of many pools on this prod server and we
can't just panic everything because one pool goes away.
I just need a way to reset one pool that's stuck.
If the architecture of zfs ca
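For reference, the documented way to resume I/O on a pool that is suspended in failmode=wait is zpool clear once the backing devices are reachable again; whether that actually helps while the iSCSI transport is still broken is exactly what this thread is about. A sketch with placeholder names:

    # after the iSCSI target comes back, clear the errors and retry the suspended I/O
    zpool clear apppool01

    # or clear just the vdev that faulted (device name is a placeholder)
    zpool clear apppool01 c2t1d0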
On Mon, Nov 16, 2009 at 4:49 PM, Tim Cook wrote:
> Is your failmode set to wait?
Yes. This box has like ten prod zones and ten corresponding zpools
that initiate to iSCSI targets on the filers. We can't panic the
whole box just because one {zone/zpool/iSCSI target} fails. Are there
undocumented
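Worth noting for the multi-pool case: failmode is a per-pool property, so each of those ten pools can be inspected and set independently. A quick sketch (pool name is a placeholder):

    # list the failmode setting for every imported pool on the box
    zpool get failmode

    # or for a single pool
    zpool get failmode apppool01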
On Nov 16, 2009, at 2:00 PM, Martin Vool wrote:
> I already got my files back actually, and the disk already contains new
> pools, so I have no idea how it was set.
> I have to make a VirtualBox installation and test it.
Don't forget to change VirtualBox's default cache flush setting.
http://www.s
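The URL above is cut off, but the setting being referred to is presumably VirtualBox's IgnoreFlush knob (this comes from the VirtualBox manual, not from this thread), roughly:

    # stop the emulated IDE controller from ignoring the guest's cache-flush requests
    # (VM name and LUN index are placeholders; use ahci instead of piix3ide for SATA disks)
    VBoxManage setextradata "Solaris10-test" \
        "VBoxInternal/Devices/piix3ide/0/LUN#[0]/Config/IgnoreFlush" 0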
On Mon, Nov 16, 2009 at 4:00 PM, Martin Vool wrote:
> I already got my files back actually, and the disk already contains new
> pools, so I have no idea how it was set.
>
> I have to make a VirtualBox installation and test it.
> Can you please tell me how to set the failmode?
http://prefetch
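Since that link is truncated, here is a minimal sketch of setting the property (pool name is a placeholder; valid values are wait, continue and panic):

    # show the current setting
    zpool get failmode tank

    # "wait" is the default; "continue" returns EIO to callers instead of blocking
    zpool set failmode=continue tank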
I already got my files back actually, and the disk already contains new pools,
so I have no idea how it was set.
I have to make a VirtualBox installation and test it.
Can you please tell me how to set the failmode?
On Mon, Nov 16, 2009 at 2:10 PM, Martin Vool wrote:
> I encountered the same problem... like I said in the first post, the zpool
> command freezes. Does anyone know how to make it respond again?
Is your failmode set to wait?
--Tim
I encountered the same problem... like I said in the first post, the zpool
command freezes. Does anyone know how to make it respond again?
The zpool for a zone of a customer-facing production appserver hung due to iSCSI
transport errors. How can I {forcibly} reset this pool? zfs commands
are hanging and iscsiadm remove refuses.
r...@raadiku~[8]8:48#iscsiadm remove static-config
iqn.1986-03.com.sun:02:aef78e-955a-4072-c7f6-afe087723466
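For completeness, these are the sorts of commands I'd use to see what state the pool and the initiator session are in (none of this is from the original post, and note that status commands can themselves block while a pool is hung in failmode=wait):

    # summary of pools that are not healthy
    zpool status -x

    # connection details for each configured iSCSI target
    iscsiadm list target -v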