On Tue, 2006-06-27 at 23:07, Steve Bennett wrote:
> From what little I currently understand, the general advice would
> seem to be to assign a filesystem to each user, and to set a quota
> on that. I can see this being OK for small numbers of users (up to
> 1000 maybe), but I can also see it being a bit tedious for larger
> numbers than that.
I've seen this discussed - even recommended. I don't think, though -
given that zfs has been available in a supported version of Solaris
for about 24 hours or so - that we've got to the point of best
practice or recommendation yet.

That said, the idea of one filesystem per user does have its
attractions. With zfs - unlike other filesystems - it's feasible.
Whether it's sensible is another matter. Still, you could give them a
zone each as well...

(One snag is that for undergraduates there isn't really an
intermediate level - department or research grant, for example - that
can be used as the allocation unit.)

> I just tried a quick test on Sol10u2:
>
> for x in 0 1 2 3 4 5 6 7 8 9; do for y in 0 1 2 3 4 5 6 7 8 9; do
>   zfs create testpool/$x$y; zfs set quota=1024k testpool/$x$y
> done; done
>
> [apologies for the formatting - is there any way to preformat text
> on this forum?]
>
> It ran OK for a minute or so, but then I got a slew of errors:
>
> cannot mount '/testpool/38': unable to create mountpoint
> filesystem successfully created, but not mounted
>
> So, OOTB there's a limit that I need to raise to support more than
> approx 40 filesystems (I know that this limit can be raised; I've
> not checked exactly what I need to fix). It does raise the question
> of why there's a limit like this when ZFS is encouraging the use of
> large numbers of filesystems.

Works fine for me. I've done this up to 16000 or so (not with current
bits - that was last year).

> If I have 10,000 filesystems, is the mount time going to be a
> problem? I tried:
>
> for x in 0 1 2 3 4 5 6 7 8 9; do for y in 0 1 2 3 4 5 6 7 8 9; do
>   zfs umount testpool/001; zfs mount testpool/001
> done; done
>
> This took 12 seconds, which is OK until you scale it up - even if
> we assume that mount and unmount take the same amount of time,

It's not quite symmetric; I think umount is a fraction slower (it has
to check whether the filesystem is in use, amongst other things), but
the estimate is probably accurate enough.

> so 100 mounts will take 6 seconds, which means that 10,000 mounts
> will take 10 minutes. Admittedly, this is on a test system without
> fantastic performance, but there *will* be a much larger delay on
> mounting a ZFS pool like this than on a comparable UFS filesystem.

My test last year got to 16000 filesystems on a 1G server before it
went ballistic and all operations took infinitely long. I had clearly
run out of physical memory.

10 minutes doesn't sound too bad to me. It's an order of magnitude
quicker than it took to initialize ufs quotas before ufs logging was
introduced.

> One alternative is to ditch quotas altogether - but even though
> "disk is cheap", it's not free, and regular backups take time (and
> tapes are not free either!). In any case, 10,000 undergraduates
> really will be able to fill more disks than we can afford to
> provision.

Last year, before my previous employer closed down, we switched off
user disk quotas for 20,000 researchers. The world didn't end. The
disks didn't fill up. All the work we had to do managing user quotas
vanished. The number of calls to the helpdesk to sort out stupid
problems caused by applications running out of disk space plummeted
to zero.
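For anyone who wants to try the one-filesystem-per-user idea with real
account names rather than a test loop, here's a minimal sketch of the
provisioning step. The pool name "home", the quota value, and the
users.txt file of usernames are all assumptions made for the sake of
the example, not a recommendation:

    #!/bin/sh
    # One ZFS filesystem per user under the "home" pool, each with its
    # own quota. Assumes the pool already exists and that users.txt
    # lists one username per line (both are assumptions of this sketch).
    while read user; do
        zfs create home/$user
        zfs set quota=200m home/$user
        chown $user /home/$user
    done < users.txt

And if the worry is how long all those mounts take when the pool is
imported, timing a full export/import cycle of an idle test pool
(time zpool export testpool; time zpool import testpool) gives a more
direct number than extrapolating from mount/umount pairs in a loop.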