Last Friday, one of our V880s kernel panicked with the following
message. This is a SAN-connected ZFS pool attached to one LUN. From
this, it appears that the SAN 'disappeared' and then there was a panic
shortly after.
Am I reading this correctly?
Is this normal behavior for ZFS?
This is a mostl
Douglas Denny wrote:
> Last Friday, one of our V880s kernel panicked with the following
> message. This is a SAN-connected ZFS pool attached to one LUN. From
> this, it appears that the SAN 'disappeared' and then there was a panic
> shortly after.
>
> Am I reading this correctly?
Yes.
> Is this
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
> Is this normal behavior for ZFS?
Yes. You have no redundancy (from ZFS' point of view at least),
so ZFS has no option except panicking in order to maintain the
integrity of your data.
This is interesting from an implementation point of
Douglas Denny wrote:
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
> Is this normal behavior for ZFS?
Yes. You have no redundancy (from ZFS' point of view at least),
so ZFS has no option except panicking in order to maintain the
integrity of your data.
This is interesting from a i
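For anyone following along, a minimal sketch of what giving ZFS its own
redundancy could look like, so a single failed LUN no longer forces a panic
(pool and device names here are hypothetical):

    # Mirror the pool across two independent SAN LUNs; if one side
    # disappears, ZFS can keep running on the survivor.
    zpool create tank mirror c2t0d0 c3t0d0

    # Confirm both sides show ONLINE before relying on it.
    zpool status tank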
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
If you look into your /var/adm/messages file, you should see
more than a few seconds' worth of IO retries, indicating that
there was a delay before panicing while waiting for the device
to return.
My original post contains all the warnin
Douglas Denny wrote:
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
If you look into your /var/adm/messages file, you should see
more than a few seconds' worth of IO retries, indicating that
there was a delay before panicing while waiting for the device
to return.
My original post c
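As a rough way to quantify that delay (the exact message text depends on the
HBA driver, so treat the grep pattern as an assumption):

    # Pull the retry/offline messages that preceded the panic; the
    # timestamps show how long the LUN was gone before the give-up.
    egrep -i 'retry|offline|transport' /var/adm/messages | tail -40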
If you take a look at these messages, the somewhat unusual condition
that may lead to unexpected behaviour (i.e. fast give-up) is that,
whilst this is a SAN connection, it is achieved through a non-Leadville
config; note the fibre-channel and sd references. In a
Leadville compliant installation
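For reference, a quick (and admittedly rough) way to check whether the Sun
Leadville FC stack is in play, as opposed to a third-party HBA driver sitting
over sd, is sketched below; exact output varies by release and HBA:

    # Leadville fabric attachment points show up in cfgadm, and the
    # LUNs are normally bound to ssd rather than sd.
    cfgadm -al | grep fc
    luxadm -e port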
Douglas Denny wrote:
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
> Is this normal behavior for ZFS?
Yes. You have no redundancy (from ZFS' point of view at least),
so ZFS has no option except panicking in order to maintain the
integrity of your data.
This is interesting from a i
I am having no luck replacing my drive as well. A few days ago I replaced my drive
and it's completely messed up now.
  pool: mypool2
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait fo
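For comparison, the usual replace-then-wait sequence looks something like the
following (device names are hypothetical):

    # Tell ZFS which disk replaces which, then watch the resilver run.
    zpool replace mypool2 c1t2d0 c1t3d0
    zpool status -v mypool2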
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem instead of kernel panicking on a per-zpool
basis. If it's a system-critical partition like a database I'd prefer
it to kernel panic and thereby trigger a fail-over of the
application. However, if it
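As an aside, later ZFS releases grew a per-pool failmode property that covers
roughly this request; a sketch, assuming a release that supports it:

    # failmode controls behaviour when the last path to a pool fails:
    # wait (block I/O), continue (return EIO), or panic (the old default).
    zpool set failmode=continue tank
    zpool get failmode tank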
On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote:
> mypool2/[EMAIL PROTECTED] 34.4M - 151G -
> mypool2/[EMAIL PROTECTED] 141K - 189G -
> mypool2/d3 492G 254G 11.5G legacy
>
> I am so confused with all of this... Why is it taking so long to replace that
> one
Hi all,
Sorry if my question is not very clear; I'm not very familiar with ZFS (which
is why I ask this question).
Suppose I have a lot of low-cost RAID arrays (like Brownie, i.e. IDE/SATA
disks), all SCSI-attached (around 10 of them, totalling ~20 TB).
Now if I buy some "high" level big ra
Jason J. W. Williams wrote:
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem instead of kernel panicking on a per-zpool
basis. If it's a system-critical partition like a database I'd prefer
it to kernel panic and thereby trigger a fail-over of th
> If you take a look at these messages, the somewhat unusual condition
> that may lead to unexpected behaviour (i.e. fast give-up) is that,
> whilst this is a SAN connection, it is achieved through a non-Leadville
> config; note the fibre-channel and sd references. In a
> Leadville compliant instal
Any chance we might get a short refresher warning when creating a
striped zpool? O:-)
Best Regards,
Jason
On 12/4/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Hi all,
>
> Having experienced this, it would be nice if there was an option to
> offline the filesystem
Peter Eriksson wrote:
If you take a look at these messages, the somewhat unusual condition
that may lead to unexpected behaviour (i.e. fast give-up) is that whilst
this is a SAN connection it is achieved through a non-Leadville
config; note the fibre-channel and sd references. In a Leadville
compl
Matthew Ahrens wrote:
Jason J. W. Williams wrote:
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem instead of kernel panicking on a per-zpool
basis. If it's a system-critical partition like a database I'd prefer
it to kernel panic and thereby tr
It is possible to configure ZFS in the way you describe, but your performance
will be limited by the older array.
All mirror writes have to be stored on both arrays before they are considered
complete, so writes will be as slow as the slowest disk or array involved.
ZFS does not currently consi
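To make the shape of that configuration concrete (device names are made up),
a mirror spanning the two arrays is just:

    # Every write must commit on both sides, so sustained write speed
    # tracks the slower (older) array; reads can be served from either.
    zpool create tank mirror c2t0d0 c6t0d0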
> And to panic? How can that in any sane way be a good
> way to "protect" the application?
> *BANG* - no chance at all for the application to
> handle the problem...
I agree -- a disk error should never be fatal to the system; at worst, the file
system should appear to have been forcibly unmounted
Anton B. Rang wrote:
> Peter Eriksson wrote:
And to panic? How can that in any sane way be a good way to "protect" the
application? *BANG* - no chance at all for the application to handle
the problem...
I agree -- a disk error should never be fatal to the system; at worst,
the file system should a
Anton B. Rang wrote:
And to panic? How can that in any sane way be a good
way to "protect" the application?
*BANG* - no chance at all for the application to
handle the problem...
I agree -- a disk error should never be fatal to the system; at worst, the file system
should appear to have been for
Dale Ghent wrote:
Matthew Ahrens wrote:
Jason J. W. Williams wrote:
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem instead of kernel panicking on a per-zpool
basis. If it's a system-critical partition like a database I'd prefer
it to kernel pa
Richard Elling wrote:
Actually, it would be interesting to see how many customers change the
onerror setting. We have some data, just need more days in the hour.
I'm pretty sure you'd find that info in over 6 years of submitted
Explorer output :)
I imagine that stuff is sandboxed away in a
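For context, the setting being referred to is presumably the UFS onerror mount
option (default 'panic'); a rough way to see whether a site has changed it:

    # Sites that opted out of panic-on-error would show onerror=lock
    # or onerror=umount in their mount options.
    grep onerror /etc/vfstab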
Hi All,
I am new to Solaris. Please clarify the following questions for me.
1) On Linux, to check for the presence of an ext2/ext3 file system on a device we use
the tune2fs command. Similar to tune2fs, is there any command to check for the
presence of a ZFS file system on a device?
2) When
Hi Mastan,
On Dec 4, 2006, at 11:13 PM, dudekula mastan wrote:
Hi All,
I am new to Solaris. Please clarify the following questions for me.
1) On Linux, to check for the presence of an ext2/ext3 file system on a
device we use the tune2fs command. Similar to tune2fs, is there
any command to check
> 1) On Linux, to check for the presence of an ext2/ext3 file system on a device
> we use the tune2fs command. Similar to tune2fs, is there any command to check
> for the presence of a ZFS file system on a device?
>
You can use 'zpool import' to check normal disk devices, or give an
optional list of
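Sketching the checks mentioned above (the device path is hypothetical, and
fstyp assumes the zfs fstyp module is installed):

    # Scan attached devices for importable ZFS pools.
    zpool import

    # Or ask about one specific device.
    fstyp /dev/dsk/c1t1d0s0    # prints 'zfs' if a ZFS label is found
    zdb -l /dev/dsk/c1t1d0s0   # dumps the ZFS vdev labels, if any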