Matthew Ahrens wrote:
| I believe this is because sharemgr does an O(number of shares) operation
| whenever you try to share/unshare anything (retrieving the list of shares
| from the kernel to make sure that it isn't/is already shared). I couldn't
Ok... so I was wrong; I was informed I had this backwards. It seems this
NFSv4.1 mirror-mounts thing is really only nice for getting rid of a lot of
automount maps. You still have to share each filesystem :-( I hate it when I
think there is hope just to have it taken away. Sigh...
T
On Mon, 28 Jan 2008, Chris wrote:
> I did a little bit more digging and found some interesting things: NFSv4
> mirror mounts. This would seem to be the most logical option. In this
> scenario the client would connect to a single mount, /tank/users, but
> would be able to move through the individual user file systems underneath
> that mount
[EMAIL PROTECTED] wrote on 01/28/2008 09:11:53 AM:
> I too am having the same issues. I started out using the Solaris 10 8/07
> release. I could create all the filesystems, 47,000 filesystems, but if
> you needed to reboot, patch, or shut down? Very bad. So then I read about
> sharemgr and how it was supposed to mitigate these issues. Well, after
> running
> New, yes. Aware - probably not.
>
> That users would create "many" filesystems, given cheap filesystems, was
> an easy guess, but I somehow don't think anybody envisioned that users
> would be creating tens of thousands of filesystems.
>
> ZFS - too good for its own good :-p
IMO (and given mails/
I believe this is because sharemgr does an O(number of shares) operation
whenever you try to share/unshare anything (retrieving the list of shares
from the kernel to make sure that it isn't/is already shared). I couldn't
find a bug on this (though it's been known for some time), so feel free to
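The O(number of shares) behaviour described above compounds: if every share operation rescans the full list of existing shares, then bringing up N shares costs roughly N*(N-1)/2 list entries scanned in total. A minimal sketch of that blow-up (an assumption based on the description above, not sharemgr's actual code; a temp file stands in for the kernel's share table, so no ZFS is required):

```shell
#!/bin/sh
# Simulate the quadratic cost: before adding each new share, scan the
# whole existing share table, as the per-share O(number of shares)
# check described above would.
sharetab=$(mktemp)
lookups=0
n=100
i=1
while [ "$i" -le "$n" ]; do
    # O(existing shares) scan before each new share
    scanned=$(wc -l < "$sharetab" | tr -d ' ')
    lookups=$((lookups + scanned))
    echo "tank/fs$i" >> "$sharetab"
    i=$((i + 1))
done
echo "$lookups"    # 100*99/2 = 4950 scans for only 100 filesystems
rm -f "$sharetab"
```

Scaled to the 47,000 filesystems mentioned in this thread, the same pattern implies on the order of a billion entries scanned across the run, which would fit the multi-minute `zfs create` times being reported. A constant-time lookup structure for "is this already shared?" would make each operation O(1) instead.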
On Wed, Jan 23, 2008 at 08:02:22AM -0800, Akhilesh Mritunjai wrote:
> I remember reading a discussion where these kinds of problems were
> discussed.
>
> Basically it boils down to "everything" not being aware of the radical
> changes in the "filesystems" concept.
>
> All these things are being worked on, but it might take some time before
> everything is made aware that yes, it's no
Anyone out there using sharenfs=on with a large amount
of filesystems? We have over 1 filesystems all in one
pool. Everything is great until we turn on sharenfs
(zfs set sharenfs=on poolName). Once that is enabled,
zfs create poolName/filesystem takes about 5 minutes to
complete. If nfs shari