Ihsan,
If you are running Solaris 10 then you are probably hitting:
6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which
calls biowait() and deadlock/hangs host
This was fixed in OpenSolaris (build 48), but a patch is not yet
available for Solaris 10.
Thanks,
George
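For anyone wondering whether a given machine already has the fix, the stock Solaris checks are below (an aside, not part of George's mail; there is no Solaris 10 patch ID to look for yet):

[EMAIL PROTECTED] # uname -v          (kernel build string, e.g. snv_55 on Nevada)
[EMAIL PROTECTED] # cat /etc/release  (release banner naming the build/update)
[EMAIL PROTECTED] # showrev -p        (Solaris 10: lists installed patches)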
Ihsan Dogan wrote:
On 24.1.2007 15:49, Michael Schuster wrote:
>> I am going to create the same conditions here but with snv_55b and
>> then yank a disk from my zpool. If I get a similar response then I
>> will *hope* for a crash dump.
>>
>> You must be kidding about the "open a case" however. This is ...
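(An aside on the test described above, using standard ZFS commands rather than anything from the thread: if the goal is only to see how ZFS reacts to a missing device, it can be taken offline without touching the hardware; device names here are hypothetical:

[EMAIL PROTECTED] # zpool offline pool0 c1t0d0
[EMAIL PROTECTED] # zpool online pool0 c1t0d0

Physically pulling the disk additionally exercises the glm/sd driver error paths, which is where this thread points, so the two tests are not interchangeable.)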
> On 24.1.2007 14:59, Dennis Clarke wrote:
>
>>> Jan 23 17:25:26 newponit genunix: [ID 408822 kern.info] NOTICE: glm0:
>>> fault detected in device; service still available
>>> Jan 23 17:25:26 newponit genunix: [ID 611667 kern.info] NOTICE: glm0:
>>> Disconnected tagged cmd(s) (1) timeout for Target ...
> Ihsan Dogan wrote:
>
>>> I think you hit a major bug in ZFS personally.
>>
>> For me it also looks like a bug.
>
> I think we don't have enough information to judge. If you have a supported
> version of Solaris, open a case and supply all the data (crash dump!) you
> have.
I agree we need data ...
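Since "supply a crash dump" comes up repeatedly in the thread, the standard Solaris way to make sure a dump is captured (stock commands, not quoted from anyone's mail):

[EMAIL PROTECTED] # dumpadm       (shows the dump device and savecore directory)
[EMAIL PROTECTED] # dumpadm -y    (enables savecore on reboot)
[EMAIL PROTECTED] # savecore -L   (Solaris 10: saves a live dump of the running system)

After a panic the dump appears as unix.N and vmcore.N under the savecore directory, /var/crash/<hostname> by default.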
On 24.1.2007 14:59, Dennis Clarke wrote:
>> Jan 23 17:25:26 newponit genunix: [ID 408822 kern.info] NOTICE: glm0:
>> fault detected in device; service still available
>> Jan 23 17:25:26 newponit genunix: [ID 611667 kern.info] NOTICE: glm0:
>> Disconnected tagged cmd(s) (1) timeout for Target ...
Hello,
On 24.1.2007 14:49, Jason Banham wrote:
> The panic looks to be due to the fact that your SVM state databases
> aren't all there: when we came to update one of them, we found that
> <= 50% of the state databases were available, and the system panicked.
The metadbs are fine. I haven't touched them at all:
[EMAIL PROTECTED] # metadb ...
Ihsan Dogan wrote:
>> I think you hit a major bug in ZFS personally.
> For me it also looks like a bug.
I think we don't have enough information to judge. If you have a supported
version of Solaris, open a case and supply all the data (crash dump!) you have.
HTH
--
Michael Schuster
Hello,
On 24.1.2007 14:40, Dennis Clarke wrote:
>> We're setting up a new mailserver infrastructure and decided to run it
>> on ZFS. On an E220R with a D1000, I've set up a storage pool with four
>> mirrors:
>
> Good morning Ihsan ...
>
> I see that you have everything mirrored here, that's excellent.
Hello Michael,
On 24.1.2007 14:36, Michael Schuster wrote:
>> --
>> [EMAIL PROTECTED] # zpool status
>> pool: pool0
>> state: ONLINE
>> scrub: none requested
>> config:
>
> [...]
>
>> Jan 23 18:51:38 newponit panic[cpu2]/thread=...
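(Side note, standard commands rather than anything from this mail: zpool status with no arguments prints every pool whether healthy or not; for a quick check

[EMAIL PROTECTED] # zpool status -x        (prints only pools with problems, or "all pools are healthy")
[EMAIL PROTECTED] # zpool status -v pool0  (also lists any data errors recorded since the last scrub)

which is handy to read alongside the console messages quoted above.)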
Afternoon,
The panic looks to be due to the fact that your SVM state databases
aren't all there: when we came to update one of them, we found that
<= 50% of the state databases were available, and the system panicked.
This doesn't look like anything to do with ZFS.
I'd check the output from metadb and see if it looks like all the
replicas are still present.
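For reference, the standard SVM checks (stock commands, and the slice name below is made up):

[EMAIL PROTECTED] # metadb -i                 (lists all replicas, with a legend for the status flags)
[EMAIL PROTECTED] # metadb -a -c 2 c0t2d0s7   (adds two more replicas on another slice)

SVM only keeps running while more than half of the replicas are good; at 50% or fewer it panics, which matches the behaviour described above.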
> Hello,
>
> We're setting up a new mailserver infrastructure and decided to run it
> on ZFS. On an E220R with a D1000, I've set up a storage pool with four
> mirrors:
Good morning Ihsan ...
I see that you have everything mirrored here, that's excellent.
When you pulled a disk, was it a ...
Ihsan Dogan wrote:
Hello,
We're setting up a new mailserver infrastructure and decided to run it
on ZFS. On an E220R with a D1000, I've set up a storage pool with four
mirrors:
--
[EMAIL PROTECTED] # zpool status
pool: pool0
state: ONLINE
scrub: none requested
config: ...
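For context, a pool of four two-way mirrors like the one described is created along these lines (device names are hypothetical; the real ones come from format):

[EMAIL PROTECTED] # zpool create pool0 \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0 \
    mirror c1t3d0 c2t3d0

Each mirror pair survives a single disk failure on its own, which is why a pulled disk should leave the pool degraded rather than hang or panic the host.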