Ewen Chan wrote:
(with help from Robert)
Yes, there are files.
# pwd
/var/crash/FILESERVER
# ls -F
bounds   unix.0   unix.1   vmcore.0   vmcore.1
# mdb 0
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 ufs ip
sctp usba fctl nca lofs random zfs nfs sppp ptm cpc fcip ]
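For anyone who wants to poke at the same dump: once mdb has it loaded (the "0"
argument is shorthand for the unix.0/vmcore.0 pair in the current directory),
the usual first commands at the "> " prompt are:

> ::status
(panic summary: OS version, panic string, dump contents)
> ::msgbuf
(kernel message buffer from just before the panic)
> ::quit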
How do I do that?
I have the system messages (recorded by syslogd), and from those you can tell
roughly the time period when things went wrong.
If anybody from anywhere (Sun, ZFS, Solaris, etc.) wants to take a look at the
unix.* and vmcore.* data, in addition to any logs or system messages, I can
make them available.
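On Solaris those syslogd messages land in /var/adm/messages, so a quick way to
bracket the failure window is to search there and compare against the
timestamps on the dump files themselves, e.g.:

# grep -i panic /var/adm/messages*
# ls -l /var/crash/FILESERVER
(the mtimes on unix.*/vmcore.* pin down when each panic was saved)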
The scan order won't make any difference to ZFS, as it identifies the drives by
a label written to them, rather than by their controller path.
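That on-disk label (pool name, pool GUID, and the device's place in the vdev
tree) can be dumped with zdb if anyone wants to verify it survived; the device
path below is just a placeholder, not one from this system:

# zdb -l /dev/rdsk/c1t0d0s0
(prints the four copies of the vdev label stored on the device)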
Perhaps someone in ZFS support could analyze the panic to determine the cause,
or look at the disk labels; have you made the core file available to Sun?
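If the dump does get sent in, the matching unix.N/vmcore.N pair should travel
together, since mdb needs both; one way to package them, using the filenames
from the listing above:

# cd /var/crash/FILESERVER
# tar cf - unix.0 vmcore.0 | gzip -c > FILESERVER.dump0.tar.gz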
P.S. I don't know if it makes any difference, but I did find that the scan
order has changed somewhat.
For example, right now, it starts the scan for the drives from sd10 (i.e.
[EMAIL PROTECTED],0), whereas before, the drive scan started with sd1 (i.e.
[EMAIL PROTECTED],0).
Would it make a difference?
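As noted above, it shouldn't matter to ZFS, but for the record the
sd-instance-to-physical-path mapping is easy to check; these are generic
commands rather than output from this machine:

# grep '"sd"' /etc/path_to_inst
(each line maps a physical device path to an instance number such as sd1 or sd10)
# ls -l /dev/dsk/c*t*d*s2
(shows which controller path each logical disk name points at)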
In the instructions, it says that the system retains a copy of the zpool cache
in /etc/zfs/zpool.cache.
It also says that when the system boots up, it looks to that file to try and
mount the pool, so to get out of the panic-reboot loop, it says to delete that
file.
Well, I retained a copy of it before deleting it.
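For reference, the procedure being described boils down to moving the cache
file aside so the pool isn't opened automatically at boot, then re-importing
by hand once the box is up; re-importing can of course trigger the same panic
if the pool itself is the problem. A sketch, with "tank" standing in for the
real pool name:

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
# reboot
(after boot)
# zpool import
(scans the devices and lists pools available for import)
# zpool import tank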
Ewen Chan wrote:
> ::status
debugging crash dump vmcore.0 (64-bit) from unknown
operating system: 5.10 Generic_118855-14 (i86pc)
panic message:
assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, FALSE, FTAG,
&numbufs, &dbp), file: ../../common/fs/zfs/dmu.c, line: 366
dump content:
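For whoever ends up analyzing it: the stack that tripped that assertion is in
the same dump and can be pulled with mdb's stack commands, e.g.:

> $C
> ::stack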