For the same zpool, I observed the same issue when I tried to import it, and I also got a core dump:
bash-3.00# zpool import ttt
internal error: Value too large for defined data type
Abort (core dumped)
bash-3
The zdb output for this zpool is below:
bash-3.00# zdb ttt
version=15
name='ttt'
state=0
txg=4
pool_guid=4724975198934143337
hostid=69113181
hostname='cdc-x4100s8'
vdev_tree
    type='root'
    id=0
    guid=4724975198934143337
    children[0]
        type
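In case it helps to narrow this down, a small diagnostic sketch (hedged: it assumes the pool is still exported and that zdb on this build accepts -e for examining exported pools):
bash-3.00# zpool import        # list importable pools and the device paths they were found on
bash-3.00# zdb -e ttt          # walk the exported pool's on-disk state without importing it
"Value too large for defined data type" is the EOVERFLOW errno text, which may point at a device path or size that could not be stat'ed cleanly, so comparing the paths reported by zpool import against format/prtvtoc output might show which vdev triggers it.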
Thanks Erik, I will try it. But a new question: root on the NFS server is mapped as nobody on the NFS client.
To look into this, I set up a new test NFS server and NFS client with the same option, and in that test environment the file owner is mapped correctly, which confuses me.
Thanks.
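A minimal server-side check sketch, assuming the dataset is named test and the client is host1 as in the messages below (adjust to the real names):
bash-3.00# share | grep test       # confirm the options the NFS server is actually exporting
bash-3.00# getent hosts host1      # confirm the client name resolves consistently on the server
Since root= access is granted only when the server can match the client against the names in the root= list, a difference in name resolution between the two environments could explain why the test setup maps the owner correctly while the original one maps root to nobody.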
I updated the sharenfs option to "rw,ro...@100.198.100.0/24", and it works fine; the NFS client can now write without error.
Thanks.
Hi All,
I created a ZFS filesystem test and shared it with "zfs set sharenfs=root=host1 test". I checked the sharenfs option and it has already been updated to "root=host1":
bash-3.00# zfs get sharenfs test
-
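For reference, a hedged sketch of what the verification is expected to look like on a generic Solaris 10 system (the mountpoint /test is an assumption; exact values will differ):
bash-3.00# zfs get sharenfs test
NAME  PROPERTY  VALUE       SOURCE
test  sharenfs  root=host1  local
bash-3.00# share                   # the kernel share table should list the same options for /test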
One thing that confuses me is that some of the disks show up as OS devices and some as MPxIO devices:
bash-3.00# zpool status
pool: tpool1
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing
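A hedged sketch for checking which device names are under MPxIO control (both commands exist on Solaris 10; output is omitted here and will vary by setup):
bash-3.00# stmsboot -L             # show the mapping between non-MPxIO and MPxIO device names
bash-3.00# mpathadm list lu        # list multipathed logical units and their operational paths
If the pool was created while MPxIO was enabled and MPxIO is later disabled (or the other way around), the device names ZFS recorded in its labels may no longer match the current paths for some disks, which could explain both the mixed device names and the UNAVAIL state above.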
I have one host running Solaris 10 update 8 connected to a stk6540 array (host type set to Traffic Manager). The host has 4 paths: 2 connected to controller A and the other 2 to controller B. When I disabled MPxIO and rebooted the host, I checked the zpool status, and the testpoo