Hi Tom and all,
Tom Bird wrote:
> Hi,
>
> I have a problem with ZFS on a single device: the device is 48 x 1T
> SATA drives presented as a 42T LUN via hardware RAID 6 on a SAS bus,
> with a single ZFS pool on it as one device.
>
> There was a problem with the SAS bus which caused various errors,
> including the inevitable kernel panic; the machine came back up with
> 3 out of 4 ZFS pools mounted.
It would be nice to see a panic stack.
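If savecore is enabled, the panic stack should be recoverable from the
crash dump with mdb. A minimal sketch, assuming the dump was written to
the default /var/crash/<hostname> directory as unix.0/vmcore.0:

  # cd /var/crash/`hostname`
  # mdb unix.0 vmcore.0
  > ::status
  > ::stack
  > ::msgbuf
  > $q

::status prints the panic string, ::stack the stack of the panicking
thread, and ::msgbuf the console messages leading up to the panic.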
> I've tried reading the partition table with format, which works fine,
> and I can also dd the first 100G from the device quite happily, so the
> communication issue appears resolved; however, the device just won't
> mount. Googling around I see that ZFS does have features designed to
> reduce the impact of corruption at a particular point, multiple
> metadata copies and so on, but the commands that would help me tidy up
> a ZFS will only run once the thing has been mounted.
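Note that zdb(1M) can read the on-disk pool labels without the pool
being imported. As a quick check (assuming the c2t9d0 device from the
zpool import output below, and that the pool sits on slice 0):

  zdb -l /dev/dsk/c2t9d0s0

If all four labels print cleanly, the damage is further in than the
label area.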
> Would be grateful for any ideas, relevant output here:
> [EMAIL PROTECTED]:~# zpool import
>   pool: content
>     id: 14205780542041739352
>  state: FAULTED
> status: The pool metadata is corrupted.
> action: The pool cannot be imported due to damaged devices or data.
>         The pool may be active on another system, but can be imported
>         using the '-f' flag.
>    see: http://www.sun.com/msg/ZFS-8000-72
> config:
>
>         content     FAULTED   corrupted data
>           c2t9d0    ONLINE
>
> [EMAIL PROTECTED]:~# zpool import content
> cannot import 'content': pool may be in use from other system
> use '-f' to import anyway
> [EMAIL PROTECTED]:~# zpool import -f content
> cannot import 'content': I/O error
As long as it does not panic and just returns an I/O error, which is
rather generic, you may try to dig a little deeper with DTrace for a
chance to see where this I/O error is first generated, e.g. something
like this with the attached DTrace script:

  dtrace -s /path/to/script -c "zpool import -f content"
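With flowindent, each zfs function entry is printed indented by call
depth, and the script appends the numeric return value to each return,
so the first function coming back with 5 (EIO on Solaris) is where to
look. Purely as a hypothetical illustration (the actual function names
will differ), the tail of the output might look like:

  0  -> zfs_ioc_pool_import
  0    -> spa_import
  ...
  0    <- spa_import = 5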
It is also interesting what impact the SAS bus problem had on the
storage controller. Btw, what is the storage controller in question
here?
> [EMAIL PROTECTED]:~# uname -a
> SunOS cs3.kw 5.10 Generic_127127-11 sun4v sparc SUNW,Sun-Fire-T200
Btw, have you considered opening a support call for this issue?
hth,
victor
#!/usr/sbin/dtrace -s

#pragma D option flowindent

/*
 * Start tracing when zpool(1M) issues an ioctl -- the import request
 * reaches the kernel as an ioctl on /dev/zfs.
 */
syscall::ioctl:entry
/execname == "zpool"/
{
	self->trace = 1;
}

/*
 * flowindent prints each traced function entry indented by call
 * depth; the printf just terminates the line.
 */
fbt:zfs::entry
/self->trace/
{
	printf("\n");
}

/* Append the return value (arg1) to each function return. */
fbt:zfs::return
/self->trace/
{
	printf(" = %d\n", arg1);
}

syscall::ioctl:return
/self->trace/
{
	self->trace = 0;
}
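/*
 * Run as root:  dtrace -s /path/to/script -c "zpool import -f content"
 * fbt:zfs:: matches every function in the zfs kernel module, so the
 * output is verbose; the first return of 5 (EIO) near the end should
 * show where the import starts failing.
 */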