Dear all,

I asked before but without much feedback. As the issue is persistent,
I want to give it another try. We disabled panicking for this kind of
error in /etc/system but still see messages such as

zfs: accessing past end of object 5b1aa/21a8008 (size=60416 access=32603+32768)

in the logs. Is th
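For reference, the /etc/system change we made is along these lines (a
sketch from memory; zfs_recover is my understanding of the tunable that
turns this panic into a logged warning, and aok is a related knob
sometimes paired with it):

# /etc/system -- log recoverable ZFS errors instead of panicking
# (takes effect after a reboot)
set zfs:zfs_recover=1
# optionally also relax ASSERT-triggered panics
set aok=1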
Thank you, it was the NFS ACL I had wrong! Fixed now and working on all 3
nodes. I changed the property below and it works now; very simple, I can't
believe I missed that.

zfs get sharenfs
pool1/nas/vol1 sharenfs rw,nosuid,root=192.168.1.52 local

zfs get sharenfs
pool1/nas/vol1 sharenfs rw,nosuid,root=192.
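For the archives, the fix itself is just resetting the sharenfs property
on the dataset (the root= host below is taken from the first listing
above; the second listing got cut off, so treat the exact option string
as an assumption):

# grant root access for the NFS client and republish the share
zfs set sharenfs='rw,nosuid,root=192.168.1.52' pool1/nas/vol1
# verify the property took effect
zfs get sharenfs pool1/nas/vol1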
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Geoff Nordli
>
> I'm trying to figure out a reliable way to identify drives to make sure I pull the
> right drive when there is a failure. These will be smaller installations

http://support.orac
On 2/03/12 09:11 AM, Geoff Nordli wrote:
I'm trying to figure out a reliable way to identify drives to make sure I pull the
right drive when there is a failure. These will be smaller installations
(<16 drives).

I am pretty sure the WWN name on a SAS device is preassigned like a MAC
address, but I just want to make sure. Is there any sc
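For reference, one common way to cross-check a WWN against the physical
drive is to pull the serial number and blink the slot LED before touching
anything. A rough sketch (assumes an LSI HBA with the sas2ircu utility
installed; the controller and enclosure:slot numbers are placeholders):

# show vendor, product and serial number for each device, so the
# cXtWWNd0 name can be matched to the label printed on the drive
iostat -En

# blink the locate LED for a given enclosure:slot on controller 0
sas2ircu 0 locate 2:5 ON
# ...and turn it off again once the drive has been identified
sas2ircu 0 locate 2:5 OFF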