I am trying to give a general user permission to create ZFS filesystems in the
rpool.

zpool set delegation=on rpool
zfs allow create rpool

Both run without any issues, and zfs allow rpool reports that the user does
have the create permission. But as that user:

zfs create rpool/test
cannot create 'rpool/test': permission denied

Thanks, adding mount did allow me to create it, but it does not let me create
the mountpoint.
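For reference, a minimal sketch of the full delegation sequence, assuming a
hypothetical user name "mike" (the thread does not give one): the user needs
both the create and mount permissions, and dataset creation can still fail if
the user cannot create the mountpoint directory under the parent's mountpoint.

# as root: enable delegation and grant create plus mount
zpool set delegation=on rpool
zfs allow mike create,mount rpool

# as the user: succeeds only if the user can also create the
# mountpoint directory (e.g. /rpool/test) on the parent filesystem
zfs create rpool/test

# one possible workaround (an assumption, not from the thread): delegate
# a sub-filesystem whose mountpoint directory the user owns
zfs create rpool/mike
chown mike /rpool/mike
zfs allow mike create,mount rpool/mike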
That looks like it will work. I won't be able to test until late tonight.
Thanks
mike
We are going to be migrating to a new EMC frame using Open Replicator. ZFS is
sitting on volumes that are running MPxIO, so the controller and disk numbers
are going to change when we reboot the server. I would like to know if anyone
has done this and whether the ZFS filesystems will "just work".

Bump this up. Anyone?
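Not an answer from the thread, but a hedged sketch of the usual approach, which
relies on ZFS identifying pool members by their on-disk labels rather than by
the cXtYdZ names (the pool name here is hypothetical):

# before the cutover: cleanly export the pool
zpool export datapool

# after the reboot, with the new device paths in place:
zpool import                # scans labels and lists importable pools
zpool import datapool
zpool status datapool       # shows the pool under its new device names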
What I would really like to know is why PCI-E RAID controller cards cost more
than an entire motherboard with processor. Some cards cost over $1,000; for
what?
Does anyone know when this will be available? The project page says Q4 2009 but
does not give a build.
Any reason why ZFS would not work on an FDE (full disk encryption) hard drive?
Is it currently possible, or will it be in the near future, to shrink a zpool,
i.e. remove a disk?
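A hedged sketch of what is and is not possible on these builds (pool and device
names are hypothetical): zpool remove only applies to hot spares, cache devices
and, on newer builds, log devices; removing a data disk means copying the pool
somewhere smaller.

# works for hot spares, cache devices and (later builds) log devices
zpool remove tank c2t5d0

# for data vdevs the usual route is a new pool plus send/receive
zpool create smalltank c3t0d0
zfs snapshot -r tank@move
zfs send -R tank@move | zfs receive -F -d smalltank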
What are the ramifications of changing the recordsize of a ZFS filesystem that
already has data on it?

I want to tune the recordsize down, to a size more in line with the read size,
to speed up very small reads. Can I do this on a filesystem that already has
data on it, and how does it affect that data?
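A hedged note on the mechanics, with a hypothetical dataset name: recordsize
only governs blocks written after the property changes, so existing files keep
their old block size until they are rewritten; copying the data back in (or
restoring it from a send stream) is what actually re-blocks it.

# recordsize must be a power of two, from 512 bytes up to 128K
zfs set recordsize=8k tank/db
zfs get recordsize tank/db

# existing files are untouched; only newly written or rewritten
# data uses the new 8K records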
I am trying to bring my zpool from build 121 into build 134, and every time I
do a zpool import the system crashes.

I have read other posts about this and have tried setting zfs_recover = 1 and
aok = 1 in /etc/system. I have used mdb to verify that they are set in the
kernel, but the system still crashes.
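For comparison, a hedged sketch of how those settings usually look and how to
double-check them, plus the recovery-mode import available as of build 128 (the
pool name is hypothetical):

# /etc/system entries (require a reboot):
set zfs:zfs_recover = 1
set aok = 1

# verify the live kernel values:
echo "zfs_recover/D" | mdb -k
echo "aok/D" | mdb -k

# dry-run, then real, recovery-mode import; -F discards the last few
# transactions and rolls back to the newest consistent txg
zpool import -Fn mypool
zpool import -F mypool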
My root drive is UFS. I have corrupted my zpool, which is on a different drive
than the root drive. My system panicked, and now it core dumps at boot when ZFS
starts. I have an alternate root drive that I can boot the system with, but how
can I stop ZFS from starting up the corrupted pool when I boot from it?
> Boot from the other root drive, mount up the "bad" one at /mnt. Then:
>
> # mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad
>
> On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> > My root drive is ufs.
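Pulling that together, a hedged sketch of the whole sequence (device and pool
names are hypothetical): moving zpool.cache aside only stops the pool from
being opened automatically at boot; afterwards it can still be imported by hand
for repair.

# booted from the alternate (good) root drive:
mount /dev/dsk/c0t1d0s0 /mnt                       # the "bad" UFS root
mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad

# reboot from the original root; ZFS no longer auto-imports the pool,
# and the damaged pool can be examined or imported manually:
zpool import
zpool import -f badpool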
> I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored
> zpool. Noticed during some performance testing today that it's I/O bound but
> using hardly any CPU, so I thought turning on compression would be a quick
> win.

If it is I/O bound, won't compression make it worse?
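A hedged aside on the trade-off, with a hypothetical dataset name: compression
spends CPU (which this workload has spare) to move fewer bytes to and from
disk, so it can actually help an I/O-bound load, but it only applies to data
written after it is turned on.

# enable the default (lzjb) compression on the dataset holding the data
zfs set compression=on tank/zone1

# later, see how well rewritten data is compressing
zfs get compressratio tank/zone1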
> On 9/12/07, Mike DeMarco <[EMAIL PROTECTED]> wrote:
>
> > Striping several disks together with a stripe width that is tuned for your
> > data model is how you could get your performance up. Striping has been left
> > out of the ZFS model for some reason.
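For context, a hedged sketch of how ZFS handles this instead of an explicit
stripe-width setting (pool, device and dataset names are hypothetical): writes
are striped dynamically across all top-level vdevs, and the closest per-dataset
knob is recordsize.

# a plain pool of three disks is already a dynamic stripe
zpool create tank c1t0d0 c1t1d0 c1t2d0

# recordsize, not stripe width, is what gets tuned to match the application I/O
zfs set recordsize=8k tank/db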
Looking for a way to mount a ZFS filesystem on top of another ZFS filesystem
without resorting to legacy mode.
> Mike DeMarco wrote:
> > Looking for a way to mount a zfs filesystem on top of another zfs
> > filesystem without resorting to legacy mode.
>
> doesn't simply 'zfs set mountpoint=...' work for you?
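A minimal sketch of that suggestion, with hypothetical dataset names: the
mountpoint property lets one filesystem's tree sit inside another filesystem's
mountpoint without using legacy mounts.

zfs create tank/web                            # mounts at /tank/web by default
zfs create tank/logs
zfs set mountpoint=/tank/web/logs tank/logs    # now mounted on top of tank/web
zfs get mountpoint tank/logs                   # verify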
I currently have a zpool with two 8GB disks in it. I need to replace them with
a single 56GB disk.

With Veritas I would just add the new disk in as a mirror, break off the other
plex, and then destroy it. I see no way of being able to do this with ZFS.
Being able to migrate the data without having ...
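A hedged sketch of the two cases, with hypothetical device names: if the two
8GB disks are a mirror, the Veritas-style attach-and-break trick has a direct
equivalent; if they are a two-disk stripe, the data has to be copied to a new
pool, since top-level vdevs cannot be removed.

# mirrored pool: attach the 56GB disk, wait for resilver, detach the old ones
zpool attach tank c1t0d0 c3t0d0
zpool status tank                  # wait for the resilver to complete
zpool detach tank c1t0d0
zpool detach tank c1t1d0

# striped pool: build a new pool on the big disk and copy with send/receive,
# as in the pool-shrink sketch above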