On Fri, Oct 15, 2010 at 10:06 PM, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Cassandra Pugh
> >
> > I would like to know how to replace a failed vdev in a non-redundant
> > pool.
Hello,
I would like to know how to replace a failed vdev in a non-redundant pool.
I am using fibre-attached disks and cannot simply place the disk back
into the machine, since the device is virtual.
I have the latest kernel from September 2010, which includes all of the
new ZFS upgrades.
Please, can you help?
I tried zpool replace; however, the new drive is slightly smaller, and
even with -f, it refuses to replace the drive.
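For reference, here is roughly what that attempt looks like, with
made-up device names (the new device being smaller than the old one is
a hard limit, and -f does not override it):

# zpool replace -f tank c3t2d0 c5t1d0
cannot replace c3t2d0 with c5t1d0: device is too small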
I guess I will have to export the pool and destroy this one to get my
drives back.
Still would like the ability to shrink a pool.
-
Cassandra
(609) 243-2413
Unix Administrator
"F
The pool is not redundant, so I would suppose, yes, it is RAID-0 (a
stripe) at the software level.
I have a few drives on a specific array that I would like to remove
from this pool.
I have discovered the "replace" command, and I am going to try to
replace, 1 for 1, the drives I would like to remove.
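Roughly what I have in mind, with made-up pool and device names (each
replacement has to finish resilvering before the next one starts):

# zpool replace tank c3t0d0 c5t0d0
# zpool status tank    # wait for "resilver completed", then do the next drive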
Hello list,
This has probably been discussed before; however, I would like to bring
it up again so that the powers that be know someone else is looking for
this feature.
I would like to be able to shrink a pool and remove a non-redundant disk.
Is this something that is in the works?
It would be fantastic.
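For context, this is the wall I hit (names made up); as far as I can
tell, zpool remove only handles spares, cache, and log devices, not
top-level data vdevs:

# zpool remove tank c4t0d0
cannot remove c4t0d0: only inactive hot spares, cache, or log devices can be removed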
Pasi Kärkkäinen wrote:
> On Fri, Jun 04, 2010 at 08:43:32AM -0400, Cassandra Pugh wrote:
> > Thank you, when I manually mount using the "mount -t nfs4" option,
> > I am able to see the entire tree; however, the permissions are set
> > as nfsnobody.
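The usual cause of everything showing up as nfsnobody over NFSv4 is an
ID-mapping domain mismatch between client and server. A quick check on
the Linux client (the domain value here is just an example):

# grep -i '^Domain' /etc/idmapd.conf
Domain = example.com          # must match the server's NFSv4 domain
# service rpcidmapd restart   # restart the mapper after editing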
-
Cassandra
(609) 243-2413
Unix Administrator
"From a little spark may burst a mighty flame."
-Dante Alighieri
On Thu, Jun 3, 2010 at 4:33 PM, Brandon High wrote:
> On Thu, Jun 3, 2010 at 12:50 PM, Cassandra Pugh wrote:
> > The special case here is that I am trying to traverse NESTED zfs
> > systems, ...
I am trying to set this up as an automount.
Currently I am defining a mount for each area, but I have a lot of them
to mount.
When I run showmount -e nfs_server I do see all of the shared directories.
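A rough autofs sketch that avoids listing every area by hand (the map
file name is made up; the wildcard key mounts any shared subdirectory
on demand):

# /etc/auto.master
/pool   /etc/auto.pool

# /etc/auto.pool
*   -fstype=nfs4   nfs_server:/pool/&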
-
Cassandra
(609) 243-2413
Unix Administrator
"From a little spark may burst a mighty flame."
-Dante Alighieri
> ... correctly until the above two issues are cleared.
>
> - You might be able to rule out the Linux client support of nested
> mount points by just sharing a simple test dataset, like this:
>
> # zfs create mypool/test
> # cp /usr/dict/words /mypool/test/file.1
> # zfs set sharenfs=on mypool/test
> # ls /net/t2k-brm-03/pool/myfs1
> file.1 myfs2
> # ls /net/t2k-brm-03/pool/myfs1/myfs2
> file.2
> # mount -F nfs t2k-brm-03:/pool/myfs1 /mnt
> # ls /mnt
> file.1 myfs2
> # ls /mnt/myfs2
> file.2
>
> On the server:
>
> # touch /pool/myfs1/myfs2/file.3
>
> On the client:
>
> # ls /mnt/myfs2
I was wondering if there is a special option to share out a set of
nested directories? Currently if I share out a directory such as
/pool/mydir1/mydir2, mydir1 shows up and I can see mydir2, but nothing
inside mydir2. mydir1 and mydir2 are each a zfs filesystem, each shared
with sharenfs=on.
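For reference, a rough sketch of the setup in question (server-side
commands Solaris-style, dataset names from above):

# zfs set sharenfs=on pool/mydir1
# zfs set sharenfs=on pool/mydir1/mydir2

On a Linux client, NFSv4 can cross into the nested dataset via a mirror
mount, while NFSv3 needs an explicit mount for each dataset:

# mount -t nfs4 nfs_server:/pool/mydir1 /mnt
# mount -t nfs nfs_server:/pool/mydir1/mydir2 /mnt/mydir2   # needed with v3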