Robert Milkowski wrote:
Hello Richard,
Tuesday, December 5, 2006, 7:01:17 AM, you wrote:
RE> Dale Ghent wrote:
>>
>> Similar to UFS's onerror mount option, I take it?
RE> Actually, it would be interesting to see how many customers change the
RE> onerror setting. We have some data, just need more days in the hour.
Som
Richard Elling wrote:
>> Actually, it would be interesting to see how many customers change the
>> onerror setting. We have some data, just need more days in the hour.
> I'm pretty sure you'd find that info in over 6 years of submitted
> Explorer output :)
I imagine that stuff is sandboxed away in a
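For context, the UFS behaviour being compared against is chosen per filesystem at mount time with the onerror option (panic is the default; lock and umount are the alternatives). A minimal sketch, with the device and mount point as placeholders:

    # default behaviour: panic the system on an internal UFS inconsistency
    mount -F ufs /dev/dsk/c1t0d0s6 /data

    # apply a file system lock instead of panicking
    mount -F ufs -o onerror=lock /dev/dsk/c1t0d0s6 /data

    # forcibly unmount the filesystem instead of panicking
    mount -F ufs -o onerror=umount /dev/dsk/c1t0d0s6 /data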
Dale Ghent wrote:
Matthew Ahrens wrote:
Jason J. W. Williams wrote:
> Hi all,
> Having experienced this, it would be nice if there was an option to
> offline the filesystem instead of kernel panicking on a per-zpool
> basis. If it's a system-critical partition like a database I'd prefer
> it to kernel-panic
Any chance we might get a short refresher warning when creating a
striped zpool? O:-)
Best Regards,
Jason
On 12/4/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Hi all,
>
> Having experienced this, it would be nice if there was an option to
> offline the filesystem
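For readers unfamiliar with the term, the "striped zpool" above is simply a pool whose top-level vdevs have no redundancy, so the loss of any one device takes the whole pool with it. A minimal sketch of the difference (pool and device names are placeholders):

    # non-redundant ("striped") pool: any single LUN failure is fatal to the pool
    zpool create tank c2t0d0 c2t1d0

    # the same devices with ZFS-level redundancy
    zpool create tank mirror c2t0d0 c2t1d0

    # zpool status shows whether the pool has any redundancy
    zpool status tank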
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem instead of kernel panicking on a per-zpool
basis. If it's a system-critical partition like a database I'd prefer
it to kernel-panic and thereby trigger a fail-over of the
application. However, if it
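The per-pool behaviour being asked for here later surfaced in ZFS as the failmode pool property (wait, continue or panic). A sketch of how it is used, assuming a release new enough to have the property (it did not exist at the time of this thread), with the pool name as a placeholder:

    # inspect the current setting; wait is the default
    zpool get failmode tank

    # return errors to applications and keep the system up instead of panicking
    zpool set failmode=continue tank

    # or explicitly ask for panic-on-failure
    zpool set failmode=panic tank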
Douglas Denny wrote:
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
> > Is this normal behavior for ZFS?
> Yes. You have no redundancy (from ZFS' point of view at least),
> so ZFS has no option except panicking in order to maintain the
> integrity of your data.
This is interesting from an implementation point of
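If ZFS-level redundancy is wanted for a pool like this one (a single SAN LUN), one option is to present a second LUN and let ZFS mirror the existing device. A sketch with placeholder pool and device names, where c3t0d0 stands for the device already in the pool:

    # attach a second LUN to form a two-way mirror and resilver onto it
    zpool attach tank c3t0d0 c3t1d0

    # watch the resilver progress and the new mirror layout
    zpool status tank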
If you take a look at these messages the somewhat unusual condition
that may lead to unexpected behaviour (ie. fast giveup) is that
whilst this is a SAN connection it is achieved through a non-
Leadville config, note the fibre-channel and sd references. In a
Leadville compliant installation
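A rough way to check which fibre-channel stack a host is actually using (command availability varies by Solaris release and by any third-party HBA driver installed):

    # Leadville (fp/fcp/ssd) installations report their HBAs here
    fcinfo hba-port

    # otherwise, look at which drivers the disk and HBA nodes are bound to
    prtconf -D | grep -i -e fibre -e ssd -e sd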
Douglas Denny wrote:
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
> If you look into your /var/adm/messages file, you should see
> more than a few seconds' worth of IO retries, indicating that
> there was a delay before panicking while waiting for the device
> to return.
My original post contains all the warnin
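A quick way to eyeball that retry window is to search the log around the panic time; the exact wording of the warnings depends on the driver stack, so the patterns below are only illustrative:

    # look for retry/offline/transport warnings leading up to the panic
    grep -i -e retry -e offline -e transport /var/adm/messages

    # or page through the file around the panic timestamp
    less /var/adm/messages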
Douglas Denny wrote:
> Last Friday, one of our V880s kernel panicked with the following
> message. This is a SAN connected ZFS pool attached to one LUN. From
> this, it appears that the SAN 'disappeared' and then there was a panic
> shortly after.
>
> Am I reading this correctly?
Yes.
> Is this
Last Friday, one of our V880s kernel panicked with the following
message. This is a SAN connected ZFS pool attached to one LUN. From
this, it appears that the SAN 'disappeared' and then there was a panic
shortly after.
Am I reading this correctly?
Is this normal behavior for ZFS?
This is a mostl