On 9/7/07, Alec Muffett <[EMAIL PROTECTED]> wrote:
> > The main bugbear is what the ZFS development team laughably call
> > quotas. They aren't quotas, they are merely filesystem size
> > restraints. To get around this the developers use the "let them eat
> > cake" mantra, "creating filesystems is easy" so create a new
> > filesystem for each user, with a "quota" on it. This is the ZFS way.

Having worked in academia and at multiple Fortune 100s, I'd say the
problem is most prevalent in academia, though it can be a minor
inconvenience in some engineering departments in industry.  In the
.edu where I used to manage the UNIX environment, I would have a tough
time weighing the quota complexities he mentions against the other
niceties.  My guess is that unless something was really broken, I
would stay with UFS or VxFS and wait for a fix.

It appears the author has not yet tried out snapshots.  The real
killer is that space used by a snapshot taken for the sysadmin's
convenience counts against the user's quota.  That would force me into
a disk-to-disk backup + snapshot scheme (rsync, because "zfs send |
zfs recv" would require keeping snapshots around for incrementals)
just to retain snapshots while minimizing their impact on users.  That
means double the disk space.  Doubling the quota is not an option,
because without soft quotas there is no way to keep people from using
all of their space.  Frankly, that would be so much trouble that I
would be better off using tape for restores, just as with UFS or VxFS.
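The scheme above might look something like this minimal sketch.  The
pool and dataset names (/tank/home, backup/home) are hypothetical;
the point is that snapshots live only on the backup copy, so they
never count against users' quotas on the live filesystem:

```shell
# Sketch only: mirror live data to a backup dataset, then snapshot
# the backup side.  Dataset names are made up for illustration.

snapname="backup-$(date +%Y%m%d)"

# Copy the live data to the backup dataset.  Only the backup side
# accumulates snapshots, so user quotas on /tank/home are unaffected.
rsync -aH --delete /tank/home/ /backup/home/

# Take a point-in-time snapshot of the backup copy for restores.
zfs snapshot "backup/home@${snapname}"
```

Note this is exactly the "double the disk space" cost described
above: the backup dataset is a full second copy of the data.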

> > Now, with each user having a separate filesystem this breaks. The
> > automounter will mount the parent filesystem as before but all you
> > will see are the stub directories ready for the ZFS daughter
> > filesystems to mount onto and there's no way of consolidating the
> > ZFS filesystem tree into one NFS share or rules in automount map
> > files to be able to do sub-directory mounting.

While NFSv4 holds some promise here, it is not a solution today, and
it won't be until all OSes released before 2008 are gone.  That will
be a while.

Use of wildcard map entries with macros (e.g. "*  server:/home/&")
can go a long way.  If that doesn't do it, an executable map that
does the appropriate munging may be in order.
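For example (server and path names hypothetical), a single wildcard
entry in the auto_home map mounts each per-user ZFS filesystem
individually, on demand, so the one-filesystem-per-user layout still
looks like a flat /home to clients:

```
# /etc/auto_home -- "&" expands to the key being looked up, so a
# reference to /home/jdoe mounts server:/export/home/jdoe.
*    server:/export/home/&
```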

> > The problem here is one of legacy code, which you'll find
> > throughout the academic, and probably commercial world. Basically,
> > there's a lot of user generated code which has hard coded paths so
> > any new system has to replicate what has gone before. (The current
> > system here has automount map entries which map new disks to the
> > names of old disks on machines long gone, e.g. /home/eeyore_data/ )

Put such entries before the *  entry and things should be OK.
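A sketch of that ordering, with a hypothetical server name and the
eeyore_data example from above (specific keys are matched before the
wildcard):

```
# /etc/auto_home -- legacy names first, wildcard entry last.
eeyore_data    server:/export/newdisk
*              server:/export/home/&
```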

For me, quotas are likely to be the pain point that prevents me from
making good use of snapshots; changing application teams'
understanding and behavior is just too much trouble.  Other pain
points are:

1. There seems to be no integration with backup tools that are time-,
space-, and I/O-efficient.  If my storage is on NetApp, I can use NDMP
to do incrementals between snapshots.  Nothing comparable exists with
ZFS.

2. Use of clones is out because I can't do a space-efficient restore.

3. The ARC obscures my view of how much RAM my machine is putting to
good use.  After the first backup, vmstat says I am at the brink of
not having enough RAM and that paging (file system and pager) will
begin soon.  That may be fine on a file server, but it really throws
me off on a J2EE server when I'm trying to figure out how many more
app servers I can add.
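One Solaris-specific mitigation (a sketch; this is the standard
arcstats kstat) is to check the ARC's current footprint before
trusting vmstat's free column:

```shell
# Current ARC size in bytes.  The ARC gives memory back under
# pressure, so vmstat's "free" understates what is really
# available to applications.
kstat -p zfs:0:arcstats:size
```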

I have a lot of hopes for ZFS and have used it with success (and
failures) in limited scope.  I'm sure that with time the improvements
will come that make that scope increase dramatically, but for now it
is confined to the lab.  :(

Mike

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
