We have over 10,000 filesystems under /home on strongspace.com and it works 
fine. I forget the details, but there was a bug fix or an improvement around 
Nevada build 32 (we're currently at build 41) that made the initial mount on 
reboot significantly faster. Before that it took around 10-15 minutes. I 
wonder if that improvement didn't make it into Solaris 10 U2?

-Jason

Sent via BlackBerry from Cingular Wireless  

-----Original Message-----
From: eric kustarz <[EMAIL PROTECTED]>
Date: Tue, 27 Jun 2006 15:55:45 
To: Steve Bennett <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Supporting ~10K users on ZFS

Steve Bennett wrote:

>OK, I know that there's been some discussion on this before, but I'm not sure 
>that any specific advice came out of it. What would the advice be for 
>supporting a largish number of users (10,000 say) on a system that supports 
>ZFS? We currently use vxfs and assign a user quota, and backups are done via 
>Legato Networker.

Using lots of filesystems is definitely encouraged - as long as doing so 
makes sense in your environment.

>From what little I currently understand, the general advice would seem to be 
>to assign a filesystem to each user, and to set a quota on that. I can see 
>this being OK for small numbers of users (up to 1,000 maybe), but I can also 
>see it being a bit tedious for larger numbers than that.
>
>I just tried a quick test on Sol10u2:
>    for x in 0 1 2 3 4 5 6 7 8 9;  do for y in 0 1 2 3 4 5 6 7 8 9; do
>    zfs create testpool/$x$y; zfs set quota=1024k testpool/$x$y
>    done; done
>[apologies for the formatting - is there any way to preformat text on this 
>forum?]
>It ran OK for a minute or so, but then I got a slew of errors:
>    cannot mount '/testpool/38': unable to create mountpoint
>    filesystem successfully created, but not mounted
>
>So, OOTB there's a limit that I need to raise to support more than approximately 
>40 filesystems (I know that this limit can be raised; I've not checked exactly 
>what I need to fix). It does raise the question of why there's a limit like 
>this when ZFS encourages the use of large numbers of filesystems.

There is no 40 filesystem limit.  You most likely had a pre-existing 
file or directory in testpool with the same name as the filesystem you tried 
to create.

fsh-hake# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
testpool                77K  7.81G  24.5K  /testpool
fsh-hake# echo "hmm" > /testpool/01
fsh-hake# zfs create testpool/01
cannot mount 'testpool/01': Not a directory
filesystem successfully created, but not mounted
fsh-hake#
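
If you want to rule that out before re-running your create loop, a quick 
pre-check along these lines will list anything already sitting where a 
mountpoint would go. This is just a sketch, assuming the default mountpoints 
under /testpool:

    for x in 0 1 2 3 4 5 6 7 8 9; do for y in 0 1 2 3 4 5 6 7 8 9; do
        # flag ordinary files (not directories) that would block mkdir of the mountpoint
        if [ -e /testpool/$x$y -a ! -d /testpool/$x$y ]; then
            echo "/testpool/$x$y already exists and is not a directory"
        fi
    done; done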

>If I have 10,000 filesystems, is the mount time going to be a problem?
>I tried:
>    for x in 0 1 2 3 4 5 6 7 8 9;  do for y in 0 1 2 3 4 5 6 7 8 9; do
>    zfs umount testpool/001; zfs mount testpool/001
>    done; done
>This took 12 seconds, which is OK until you scale it up. Even if we assume 
>that mount and unmount take the same amount of time, so that 100 mounts take 
>6 seconds, 10,000 mounts will take 10 minutes. Admittedly, this is on a test 
>system without fantastic performance, but there *will* be a much larger delay 
>mounting a ZFS pool like this than mounting a comparable UFS filesystem.

So this really depends on why and when you're unmounting filesystems.  I 
suspect it won't matter much since you won't be unmounting/remounting 
your filesystems.
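
If you do want to put a number on it, timing a full unmount/remount of 
everything at once is a better proxy than mounting one filesystem in a loop. 
A sketch (note that 'zfs umount -a' and 'zfs mount -a' act on all ZFS 
filesystems on the box, so only try this on a test system):

    # unmount everything, then time mounting it all back
    zfs umount -a
    time zfs mount -a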

>I currently use Legato Networker, which (not unreasonably) backs up each 
>filesystem as a separate session - if I continue to use this I'm going to have 
>10,000 backup sessions on each tape backup. I'm not sure what kind of 
>challenges restoring this kind of beast will present.
>
>Others have already been through the problems with standard tools such as 'df' 
>becoming less useful.

Is there a specific problem you had in mind regarding 'df'?
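
For per-user space reporting, 'zfs list' on the pool gives you roughly what a 
per-user 'df' or quota report would. A sketch (used, available and quota are 
standard properties, but check zfs(1M) on your release for the exact names):

    zfs list -r -o name,used,available,quota testpool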

>One alternative is to ditch quotas altogether - but even though "disk is 
>cheap", it's not free, and regular backups take time (and tapes are not free 
>either!). In any case, 10,000 undergraduates really will be able to fill more 
>disks than we can afford to provision. We tried running a Windows fileserver 
>back in the days when it had no support for per-user quotas; we did some 
>ad-hockery that helped to keep track of the worst offenders (albeit after the 
>event), but what really killed us was the uncertainty over whether some idiot 
>would decide to fill all available space with "vital research data" (or junk, 
>depending on your point of view).
>
>I can see the huge benefits that ZFS quotas and reservations can bring, but I 
>can also see situations where ZFS could otherwise be useful but the lack of 
>'legacy' user-based quotas makes it impractical. If the ZFS developers really 
>are not going to implement user quotas, is there any advice on what someone 
>like me could do? At the moment I'm presuming that I'll just have to leave 
>ZFS alone.

I wouldn't give up that easily... it looks like one filesystem per user, with 
one quota per filesystem, does exactly what you want:
fsh-hake# zfs get -r -o name,value quota testpool
NAME             VALUE                     
testpool         none
testpool/ann     10M
testpool/bob     10M
testpool/john    10M
....
fsh-hake#

I'm assuming that you decided against one filesystem per user because of the 
supposed 40-filesystem limit, which doesn't exist.
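
If you do go the one-filesystem-per-user route, scripting the setup for 
10,000 users is straightforward. A minimal sketch, assuming a pool named 
'home' mounted at /home, a 200m quota, and a file 'userlist' with one 
username per line (the pool name, quota and file name are made up for 
illustration):

    while read user; do
        # one filesystem per user, with its own quota
        zfs create home/$user
        zfs set quota=200m home/$user
        chown $user /home/$user
    done < userlist

Quotas can then be adjusted per user later with a single 'zfs set', without 
touching the data.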

eric


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
