[zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Mike DeMarco
I am trying to give a general user permission to create ZFS filesystems in the
rpool.

zpool set delegation=on rpool
zfs allow <user> create rpool

Both run without any issues.

zfs allow rpool reports that the user does have create permission.

zfs create rpool/test
cannot create rpool/test: permission denied

Can you not delegate permissions on the rpool?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Mike DeMarco
Thanks. Adding mount did allow me to create the filesystem, but it does not allow me
to create the mountpoint.
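
For reference, a rough sketch of the delegation this converges on, with placeholder
names; the mountpoint failure is most likely because the non-root user also needs
write access to the directory the new mountpoint gets created in:

zpool set delegation=on rpool
zfs allow <user> create,mount rpool

# the user must also be able to mkdir the mountpoint; one way (paths and names are
# assumptions) is to delegate inside a dataset whose mountpoint directory the user owns:
zfs create rpool/users
chown <user> /rpool/users
zfs allow <user> create,mount rpool/users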
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Mike DeMarco
That looks like it will work. I won't be able to test until late tonight.


Thanks
mike
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] EMC migration and zfs

2010-08-12 Thread Mike DeMarco
We are going to be migrating to a new EMC frame using Open Replicator.
ZFS is sitting on volumes that are running MPxIO, so the controller number/disk
number is going to change when we reboot the server. I would like to know if
anyone has done this: will the ZFS filesystems "just work" and find the new
disk IDs when we import the pools?

Our process would be:
zpool export any and all pools on the server
shut down the server
re-zone the storage to the new EMC frame
EMC on the back end will present the old drives through the new frame/drives
using Open Replicator
boot the server to single-user mode
zpool import the pools
reboot the server
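
A rough command-level sketch of that sequence, assuming a single pool named
datapool; ZFS finds pool members by the labels written on the disks rather than
by the c#t#d# path, so the import should pick up the new device names after the
rezone:

zpool export datapool          # cleanly export the pool before the cutover
# ...rezone, present the old LUNs through the new frame, boot to single-user mode...
zpool import                   # scans all devices and lists importable pools
zpool import datapool
zpool status datapool          # confirm the pool is online on the new device names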
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] EMC migration and zfs

2010-08-16 Thread Mike DeMarco
Bump this up. Anyone?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HTPC

2010-08-16 Thread Mike DeMarco
What I would really like to know is why PCI-E RAID controller cards cost more
than an entire motherboard with processor. Some cards cost over $1,000. For what?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs disk encryption

2009-10-13 Thread Mike DeMarco
Does anyone know when this will be available? Project says Q4 2009 but does not 
give a build.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs on FDE

2009-10-13 Thread Mike DeMarco
Any reason why ZFS would not work on an FDE (Full Disk Encryption) hard drive?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] shrink zpool

2010-08-25 Thread Mike DeMarco
Is it currently possible, or will it be in the near future, to shrink a zpool, i.e. remove a disk?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] recordsize

2010-09-16 Thread Mike DeMarco
What are the ramifications of changing the recordsize of a ZFS filesystem that
already has data on it?

I want to tune down the recordsize to speed up very small reads, to a size that
is more in line with the read size. Can I do this on a filesystem that already
has data on it, and how does it affect that data? The zpool consists of 8 SAN
LUNs.
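
For what it's worth, a hedged sketch with a placeholder dataset name: recordsize
can be changed on a live filesystem, but it only governs blocks written after the
change, so existing files keep their old block size until they are rewritten (for
example, copied with cp or rsync):

zfs get recordsize datapool/fs        # default is 128K
zfs set recordsize=8k datapool/fs     # affects newly written blocks only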

Thanks
mike
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import crashes system

2010-11-11 Thread Mike DeMarco
I am trying to bring my zpool from build 121 into build 134, and every time I
do a zpool import the system crashes.

I have read other posts about this and have tried setting zfs_recover = 1 and
aok = 1 in /etc/system. I have used mdb to verify that they are set in the
kernel, but the system still crashes as soon as the import is called.

On this system I can rebuild the entire pool from scratch, but my next system
is 4 TB and I don't have space on any other system to store that much data.

Does anyone have a way to import and upgrade an older pool on a newer OS?
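
For reference, a hedged sketch of the workaround settings described above, plus
the recovery-mode import that went in around build 128 (the pool name is a
placeholder, and whether -F helps depends on the nature of the corruption):

# /etc/system
set zfs:zfs_recover = 1
set aok = 1

# verify after reboot that the variables are set in the kernel
echo "zfs_recover/D" | mdb -k
echo "aok/D" | mdb -k

# on build 134, a recovery-mode import may succeed where a plain import panics
zpool import -F mypool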

TIA mic
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Mike DeMarco
My root drive is UFS. I have corrupted my zpool, which is on a different drive
than the root drive.
My system panicked and now it core dumps when it boots up and hits ZFS start. I
have an alternate root drive that I can boot the system with, but how can I
disable ZFS from starting on the other drive?

HELP HELP HELP
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Mike DeMarco
> Boot from the other root drive, mount up the "bad"
> one at /mnt.  Then:
> 
> # mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad
> 
> 
> 
> On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco
> <[EMAIL PROTECTED]> wrote:
> > My root drive is ufs. I have corrupted my zpool
> which is on a different drive than the root drive.
> > My system paniced and now it core dumps when it
> boots up and hits zfs start. I have a alt root drive
> that  can boot the system up with but how can I
> disable zfs from starting on a different drive?
> >
> > HELP HELP HELP
> 
> 
> 
> -- 
> Mike Gerdts
> http://mgerdts.blogspot.com/

That got it. Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Mike DeMarco
> I've got 12Gb or so of db+web in a zone on a ZFS
> filesystem on a mirrored zpool.
> Noticed during some performance testing today that
> its i/o bound but
> using hardly
> any CPU, so I thought turning on compression would be
> a quick win.

If it is I/O bound, won't compression make it worse?

> 
> I know I'll have to copy files for existing data to
> be compressed, so
> I was going to
> make a new filesystem, enable compression and rysnc
> everything in, then drop the
> old filesystem and mount the new one (with compressed
> blocks) in its place.
> 
> But I'm going to be hooking in faster LUNs later this
> week. The plan
> was to remove
> half of the mirror, attach a new disk, remove the
> last old disk and
> attach the second
> half of the mirror (again on a faster disk).
> 
> Will this do the same job? i.e. will I see the
> benefit of compression
> on the blocks
> that are copied by the mirror being resilvered?

No. Since the resilver does a block-for-block copy of the data, it will not
compress the data.

> 
> 
> -- 
> Rasputin :: Jack of All Trades - Master of Nuns
> http://number9.hellooperator.net/
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression=on and zpool attach

2007-09-12 Thread Mike DeMarco
> On 11/09/2007, Mike DeMarco <[EMAIL PROTECTED]>
> wrote:
> > > I've got 12Gb or so of db+web in a zone on a ZFS
> > > filesystem on a mirrored zpool.
> > > Noticed during some performance testing today
> that
> > > its i/o bound but
> > > using hardly
> > > any CPU, so I thought turning on compression
> would be
> > > a quick win.
> >
> > If it is io bound won't compression make it worse?
> 
> Well, the CPUs are sat twiddling their thumbs.
> I thought reducing the amount of data going to disk
> might help I/O -
> is that unlikely?

I/O bottlenecks are usually caused by a slow disk or one with a heavy workload
reading many small files. Two factors that need to be considered are head seek
latency and spin latency. Head seek latency is the amount of time it takes for
the head to move to the track that is to be written; this is an eternity for the
system (usually around 4 or 5 milliseconds). Spin latency is the amount of time
it takes for the spindle to spin the track to be read or written over the head.
Ideally you only want to pay the latency penalty once. If you have large reads
and writes going to the disk then compression may help a little, but if you have
many small reads or writes it will do nothing more than burden your CPU with work
for no gain, since you are going to pay that latency for each read or write.
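
To put rough numbers on that (back-of-the-envelope figures, assuming a 7200 RPM
drive and the ~4.5 ms average seek mentioned above):

rotational (spin) latency  = 60,000 ms/min / 7200 RPM / 2  ~ 4.2 ms
time per random I/O        = 4.5 ms seek + 4.2 ms spin     ~ 8.7 ms
random I/Os per second     = 1000 / 8.7                    ~ 115 per spindle

Shrinking a small read with compression does not buy back any of that 8.7 ms.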

Striping several disks together with a stripe width that is tuned for your data
model is how you could get your performance up. Striping has been left out of
the ZFS model for some reason. While it is true that RAIDZ will stripe the data
across a given drive set, it does not give you the option to tune the stripe
width. Due to the write performance problems of RAIDZ you may not get a
performance boost from its striping if your write-to-read ratio is too high,
since the driver has to calculate parity for each write.

> 
> > > benefit of compression
> > > on the blocks
> > > that are copied by the mirror being resilvered?
> >
> > No! Since you are doing a block for block mirror of
> the data, this would not could not compress the data.
> 
> No problem, another job for rsync then :)
> 
> 
> -- 
> Rasputin :: Jack of All Trades - Master of Nuns
> http://number9.hellooperator.net/
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression=on and zpool attach

2007-09-12 Thread Mike DeMarco
> On 9/12/07, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> 
> > Striping several disks together with a stripe width
> that is tuned for your data
> > model is how you could get your performance up.
> Stripping has been left out
> > of the ZFS model for some reason. Where it is true
> that RAIDZ will stripe
> > the data across a given drive set it does not give
> you the option to tune the
> > stripe width. Do to the write performance problems
> of RAIDZ you may not
> > get a performance boost from it stripping if your
> write to read ratio is too
> > high since the driver has to calculate parity for
> each write.
> 
> I am not sure why you think striping has been left
> out of the ZFS
> model. If you create a ZFS pool without the "raidz"
> or "mirror"
> keywords, the pool will be striped. Also, the
> "recordsize" tunable can
> be useful for matching up application I/O to physical
> I/O.
> 
> Thanks,
> - Ryan
> -- 
> UNIX Administrator
> http://prefetch.net

Oh... How right you are. I dug into the PDFs and read up on Dynamic striping. 
My bad.
ZFS rocks.
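
A small illustration of that point, with made-up pool and device names:

zpool create datapool c1t0d0 c1t1d0   # no mirror/raidz keyword: writes are dynamically striped across both disks
zfs set recordsize=8k datapool        # optionally match recordsize to the application's I/O size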
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Hierarchical zfs mounts

2007-10-22 Thread Mike DeMarco
Looking for a way to mount a ZFS filesystem on top of another ZFS filesystem
without resorting to legacy mode.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hierarchical zfs mounts

2007-10-22 Thread Mike DeMarco
> Mike DeMarco wrote:
> > Looking for a way to mount a zfs filesystem ontop
> of another zfs
> > filesystem without resorting to legacy mode.
> 
> doesn't simply 'zfs set mountpoint=...' work for you?
> 
> -- 
> Michael Schuster
> Recursion, n.: see 'Recursion'

Well, if you create, let's say, local/apps and local/apps-bin, then:
zfs set mountpoint=/apps local/apps
zfs set mountpoint=/apps/bin local/apps-bin

Now if you reboot the system there is no mechanism to tell ZFS to mount /apps
first and /apps/bin second, so you could get /apps/bin mounted first, and then
/apps either mounts over the top of it or won't mount.
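
A hedged sketch of the usual way around that: make the second filesystem a child
of the first, so the mountpoint is inherited and the parent is always mounted
before the child:

zfs create local/apps
zfs set mountpoint=/apps local/apps
zfs create local/apps/bin             # mounts at /apps/bin, after local/apps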
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs! mirror and break

2008-05-08 Thread Mike DeMarco
I currently have a zpool with two 8 GB disks in it. I need to replace them with
a single 56 GB disk.

With Veritas I would just add the disk in as a mirror, break off the other plex,
then destroy it.

I see no way of being able to do this with ZFS.

Being able to migrate data without having to unmount and remount filesystems is
very important to me.

Can anyone say when such functionality will be implemented?
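
For reference, a rough sketch of how the equivalent works in ZFS when the pool is
a single disk or an existing mirror (attach/detach operate per top-level vdev;
pool and device names here are placeholders):

zpool attach mypool c1t0d0 c2t0d0     # mirror the new 56 GB disk onto the existing one
zpool status mypool                   # wait for the resilver to complete
zpool detach mypool c1t0d0            # drop the old disk, no unmount needed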
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs! mirror and break

2008-05-09 Thread Mike DeMarco
> Mike DeMarco wrote:
> > I currently have a zpool with two 8Gbyte disks in
> it. I need to replace them with a single 56Gbyte
> disk.
> >
> > with veritas I would just add the disk in as a
> mirror and break off the other plex then destroy it.
> >
> > I see no way of being able to do this with zfs.
> >
> > Being able to migrate data without having to
> unmount and remount filesystems is very 
> > important to me.
> >
> > Can anyone say when such functionality will be
> implemented?
> >   
> 
> If the original pool is a mirror, the it is trivial
> and has been a
> features since day one.  zpool attach the new disk.
> zpool detach the old disks.
> 
> If the original pool is not a mirror, then it can get
> more
> complicated, but depends on what you want it to look
> like in the long term...
>  -- richard

The original pool is not a mirror; it consists of two disks concatenated (SAN
disks, mirrored on the SAN).
It looks like attach will only allow attachment to a single device in a zpool,
not to both devices. If I attach to one of the two devices it will grow the pool
into the whole device, and then I cannot remove the other device. I find this to
be a most frustrating problem with ZFS and wonder if any work is being done to
correct such an omission.
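
A hedged fallback at this point is to copy the data to a new pool on the 56 GB
disk with zfs send/receive, which does mean the filesystems get remounted once.
Sketch, assuming a build with recursive send (-R) and made-up pool/device names:

zpool create newpool c2t0d0
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -Fd newpool
zpool destroy oldpool                 # only after verifying the copy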
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss