UNIX admin wrote:
[Solaris 10 6/06 i86pc]
Shortly thereafter, I ran out of space on my "space" pool, but `zfs list` kept
reporting that I still had about a gigabyte of free space, while `zpool status`
seemed to correctly report that I had run out of space.
Please send us the output of 'zpool status'.
Hi,
On 9/4/06, UNIX admin <[EMAIL PROTECTED]> wrote:
[Solaris 10 6/06 i86pc]
...
Then I did a dry run of adding two more disks to the pool with `zpool add -fn space c2t10d0
c2t11d0`, from which I determined that they would be added as a RAID0 stripe, which is
not what I wanted. `zpool add -f raidz c2t10d0 c2t11d0`
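For what it's worth, a hedged sketch of the dry run that would preview a second raidz vdev instead of a plain stripe (the pool name has to come before the vdev keyword; disk names are the ones from the post):

    # preview only (-n): show the layout zpool would create, change nothing
    zpool add -n space raidz c2t10d0 c2t11d0
    # without the raidz keyword the disks become independent top-level vdevs,
    # i.e. they are simply striped alongside the existing raidz vdev
    # note: this adds a *new* raidz vdev; an existing raidz vdev cannot be
    # widened by adding disks to it, at least on that release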
On 9/4/06, UNIX admin <[EMAIL PROTECTED]> wrote:
[Solaris 10 6/06 i86pc]
I recently used a set of 6 disks in a MultiPack to create a RAIDZ volume. Then I
proceeded to do `zfs set sharenfs=root=a.b.c.d:a.b.c.e space` ("space" is how I
named the ZFS pool).
Is this really how you set the sharenfs property?
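If it helps, a hedged example of the usual form (host addresses reused from the post; any sharenfs value other than on/off is handed to share_nfs as its option string, so a root= access list like the poster's is legal):

    # give two NFS clients root access to the pool's top-level dataset
    zfs set sharenfs='root=a.b.c.d:a.b.c.e' space
    # confirm the property and the active share
    zfs get sharenfs space
    share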
[Solaris 10 6/06 i86pc]
I recently used a set of 6 disks in a MultiPack to create a RAIDZ volume. Then
I proceeded to do `zfs set sharenfs=root=a.b.c.d:a.b.c.e space` ("space" is how I
named the ZFS pool).
Then I NFS-mounted the ZFS pool on another system and proceeded to do a `find` +
`cpio -pvd` copy onto it.
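Presumably something along these lines, assuming the pool was NFS-mounted on the client at a path such as /mnt/space (the mount point and source directory are assumptions, not from the post):

    cd /data/to/copy
    # cpio pass mode: -p copy in place, -d create directories, -v list files as they go
    find . -depth -print | cpio -pvd /mnt/space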