Hi Eric,
Hard to say. Next time it happens I'll use MDB to get more info. The
applications using any zpool lock up.
-J
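
For anyone who wants to grab state the next time the hang happens, here is
a minimal MDB sketch (run as root against the live kernel; ::stacks is a
standard kernel dcmd, assuming your build's mdb has it):

   # Kernel thread stacks, grouped by stack, limited to the zfs module:
   echo "::stacks -m zfs" | mdb -k

   # Fallback if ::stacks isn't there: every thread with its full stack.
   echo "::threadlist -v" | mdb -k > /var/tmp/threads.out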
On Jan 3, 2008 3:33 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:
> When you say "starts throwing sense errors," does that mean every I/O to
> the drive will fail, or some arbitrary percentage of I/Os will fail?
Scott L. Burson wrote:
> Hi,
>
> This is in build 74, on x64, on a Tyan S2882-D with dual Opteron 275 and 24GB
> of ECC DRAM.
>
>
Not an answer, but zfs-discuss is probably the best place to ask, so
I've taken the liberty of CCing that list.
> I seem to have lost the entire contents of a ZFS r
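
No idea yet what happened here, but before concluding the data is gone, a
quick sketch of what I'd look at first (the device name below is just a
placeholder for one of the pool's vdevs):

   # Does the system still see an importable pool, and what does ZFS
   # think is wrong with it?
   zpool import
   zpool status -v

   # Are the ZFS labels still present on the member device?
   zdb -l /dev/rdsk/c0t1d0s0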
James C. McPherson wrote:
>
> The ws command hates it - "hmm, the underlying device for
> /scratch is /scratch; maybe if I loop around stat()ing
> it, it'll turn into a pumpkin"
>
> :-)
>
>
>
As does dmake, which is a real PITA for a developer!
Ian
> space_map_add+0xdb(ff014c1a21b8, 472785000, 1000)
> space_map_load+0x1fc(ff014c1a21b8, fbd52568, 1, ff014c1a1e88, ff0149c88c30)
> running snv79.
hmm.. did you spend any time in snv_74 or snv_75 that might
have gotten http://bugs.opensolaris.org/view_bug.do?bug_id=660
Joerg Schilling wrote:
Carsten Bormann <[EMAIL PROTECTED]> wrote:
On Dec 29 2007, at 08:33, Jonathan Loran wrote:
We snapshot the file as it exists at the time of
the mv in the old file system until all referring file handles are
closed, then destroy the single file snap. I know, not easy to
implement, but that is the correct behavior, I believe.
When you say "starts throwing sense errors," does that mean every I/O to
the drive will fail, or some arbitrary percentage of I/Os will fail? If
it's the latter, ZFS is trying to do the right thing by recognizing
these as transient errors, but eventually the ZFS diagnosis should kick
in. What doe
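
One way to answer that from the machine itself is to compare the error
counters with the FMA error log; a rough sketch, with the device name as a
placeholder:

   # Soft/hard/transport error counts for the suspect drive since boot:
   iostat -En c5t3d0

   # Raw error reports FMA has collected; the timestamps show whether
   # every I/O is failing or only an occasional one:
   fmdump -eV | more

   # What ZFS itself has charged against the vdev:
   zpool status -v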
This should be pretty much fixed on build 77. It will lock up for the
duration of a single command timeout, but ZFS should recover quickly
without queueing up additional commands. Since the default timeout is
60 seconds, and we retry 3 times, and we do a probe afterwards, you may
see hangs of up
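
Rough arithmetic on that, assuming each retry waits out the full timeout:
60 s for the original command plus 3 x 60 s of retries is already four
minutes before the probe. A sketch for checking the timeout the sd driver
is actually using (sd_io_time is, as far as I know, the sd driver's
command timeout in seconds; reading it with mdb is harmless, changing it
is a trade-off):

   # Current sd command timeout, in decimal seconds:
   echo "sd_io_time/D" | mdb -k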
Hi Eric,
I'd really like to suggest a helpful idea, but all I can suggest is an
end result. Running ZFS on top of STK arrays that do the RAID, the
arrays offline their bad disks very quickly and the applications never
notice. In the X4500s, ZFS times out and locks up the applications. If
ZFS is going to b
Hi Albert,
Thank you for the link. ZFS isn't offlining the disk in b77.
-J
On Jan 3, 2008 3:07 PM, Albert Chin
<[EMAIL PROTECTED]> wrote:
>
> On Thu, Jan 03, 2008 at 02:57:08PM -0700, Jason J. W. Williams wrote:
> > There seems to be a persistent issue we have with ZFS where, when one
> > of the SATA
On Thu, Jan 03, 2008 at 02:57:08PM -0700, Jason J. W. Williams wrote:
> There seems to be a persistent issue we have with ZFS where, when one
> of the SATA disks in a zpool on a Thumper starts throwing sense errors,
> ZFS does not offline the disk and instead hangs all zpools across the
> system. If it is not caught soon enough, application data ends up in an
> inconsistent state.
Hello,
There seems to be a persistent issue we have with ZFS where, when one of
the SATA disks in a zpool on a Thumper starts throwing sense errors, ZFS
does not offline the disk and instead hangs all zpools across the
system. If it is not caught soon enough, application data ends up in an
inconsistent state.
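
Until the diagnosis handles this better, one manual workaround (just a
sketch; the pool, device and attachment-point names are placeholders,
check cfgadm -al for yours) is to take the suspect disk out by hand as
soon as the sense errors start:

   # Temporarily offline the disk in the pool (-t clears at reboot):
   zpool offline -t tank c5t3d0

   # On an X4500, unconfigure the SATA port before pulling the drive:
   cfgadm -al | grep sata
   cfgadm -c unconfigure sata1/3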
We loaded Nevada_78 on a peer T2000 unit and imported the same ZFS pool. I
didn't even upgrade the pool, since we wanted to be able to move it back to
10u4. Here's a cut-and-paste of my colleague's email with the results:
Here's the latest Pepsi Challenge results.
Sol10u4 vs Nevada78. Same tuning options,
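
For anyone who wants to run the same kind of comparison themselves, a
minimal sketch of the general idea (this is not the actual benchmark from
the email; paths, sizes and pool name are placeholders, and the read-back
may be served from the ARC if the file fits in memory):

   # Sequential write then read-back on the same dataset on both builds:
   zfs create tank/bench
   ptime dd if=/dev/zero of=/tank/bench/testfile bs=128k count=65536
   ptime dd if=/tank/bench/testfile of=/dev/null bs=128k

   # Watch pool throughput from another terminal while it runs:
   zpool iostat tank 5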
I'm seeing this too. Nothing unusual happened before the panic,
just a shutdown (init 5) and a later startup. I have the crashdump
and a copy of the problem zpool (on swan). Here's the stack trace:
> $C
ff0004463680 vpanic()
ff00044636b0 vcmn_err+0x28(3, f792ecf0, ff0004463778)
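
If it helps, the usual way to pull more context out of a saved crashdump
(the file names under /var/crash are the savecore defaults; adjust the
suffix to match yours):

   cd /var/crash/`hostname`
   mdb -k unix.0 vmcore.0
   > ::status       # panic string and dump details
   > ::msgbuf       # console messages leading up to the panic
   > $C             # stack of the panicking thread, as above
   > ::panicinfo    # register state at panic time, if the dcmd is present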
In general you should not allow a Solaris system to be both an NFS server and
NFS client for the same filesystem, irrespective of whether zones are involved.
Among other problems, you can run into kernel deadlocks in some (rare)
circumstances. This is documented in the NFS administration docs. A
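
For the zones case in particular, the usual alternative is a loopback
(lofs) mount instead of NFS-mounting the host's own export back onto
itself; a sketch with placeholder paths and zone name:

   # One-off loopback mount in the global zone:
   mount -F lofs /export/scratch /scratch

   # Or make it part of the zone configuration:
   zonecfg -z myzone
   zonecfg:myzone> add fs
   zonecfg:myzone:fs> set dir=/scratch
   zonecfg:myzone:fs> set special=/export/scratch
   zonecfg:myzone:fs> set type=lofs
   zonecfg:myzone:fs> end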
Hi,
> Do you have snapshots taking place (like in a cron job) during the
> resilver process? If so, you may be hitting a bug that the resilver
> will restart from the beginning whenever a new snapshot occurs. If
> you disable the snapshots during the resilver then it should complete
> to 100%.
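
A sketch of how to check for that and hold off the snapshots until the
resilver finishes (the cron job is whatever you use for snapshots; the
pool name is a placeholder):

   # Is a resilver running, and does the percentage keep going backwards?
   zpool status tank | grep -i resilver

   # Find the snapshot job and comment it out until zpool status
   # reports that the resilver has completed:
   crontab -l
   crontab -e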
Carsten Bormann <[EMAIL PROTECTED]> wrote:
> On Dec 29 2007, at 08:33, Jonathan Loran wrote:
>
> > We snapshot the file as it exists at the time of
> > the mv in the old file system until all referring file handles are
> > closed, then destroy the single file snap. I know, not easy to
> > implement, but that is the correct behavior, I believe.
On Dec 29 2007, at 08:33, Jonathan Loran wrote:
> We snapshot the file as it exists at the time of
> the mv in the old file system until all referring file handles are
> closed, then destroy the single file snap. I know, not easy to
> implement, but that is the correct behavior, I believe.
Exact
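
There is no per-file snapshot in ZFS today, but the sequence Jonathan
describes can be approximated by hand at the dataset level. A rough
sketch with placeholder dataset and file names (it only keeps the old
blocks referenced; it does not fix NFS stale file handles):

   # Preserve the source file system as it exists at the time of the mv:
   zfs snapshot tank/old@mv-hold

   # A cross-dataset mv is really a copy plus unlink:
   mv /tank/old/bigfile /tank/new/

   # ...later, once all referring file handles are known to be closed:
   zfs destroy tank/old@mv-hold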