On 9/20/07 3:49 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:

> On Thu, 20 Sep 2007, Richard Elling wrote:
> 
>> 50,000 directories aren't a problem, unless you also need 50,000 quotas
>> and hence 50,000 file systems.  Such a large, single storage pool system
>> will be an outlier... significantly beyond what we have real world
>> experience with.
> 
> Yes, considering that 45,000 of those users will be students, we definitely
> need separate quotas for each one :).
> 
> Hmm, I get a bit of a shiver down my spine at the prospect of deploying a
> critical central service in a relatively untested configuration 8-/. What
> is the maximum number of file systems in a given pool that has undergone
> some reasonable amount of real world deployment?

15,500 is the most I see in this article:

http://developers.sun.com/solaris/articles/nfs_zfs.html

Looks like it's completely scalable, but your boot time may suffer the more
filesystems you have. Just don't reboot :)
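
If you do end up going the filesystem-per-user route, the provisioning side
is easy to script since it's just a loop over the ZFS CLI. A rough, untested
sketch (the "tank/home" parent dataset and the 5G quota are placeholders,
nothing specific to your setup):

    #!/usr/bin/env python
    # Rough sketch only: create one ZFS filesystem per user, each with its
    # own quota.  "tank/home" and the 5G quota are made-up placeholders.
    import subprocess

    def create_user_fs(username, parent="tank/home", quota="5G"):
        dataset = "%s/%s" % (parent, username)
        # Runs: zfs create -o quota=5G tank/home/<username>
        subprocess.check_call(
            ["zfs", "create", "-o", "quota=%s" % quota, dataset])

    if __name__ == "__main__":
        for user in ("alice", "bob"):   # stand-ins for the real user list
            create_user_fs(user)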

> 
> One issue I have is that our previous filesystem, DFS, completely spoiled
> me with its global namespace and location transparency. We had three fairly
> large servers, with the content evenly dispersed among them, but from the
> perspective of the client any user's files were available at
> /dfs/user/<username>, regardless of which physical server they resided on.
> We could even move them around between servers transparently.

If it was so great, why did IBM kill it?  Did they have an alternative with
the same functionality?

> 
> Unfortunately, there aren't really any filesystems available with similar
> features and enterprise applicability. OpenAFS comes closest; we've been
> prototyping it, but the lack of per-file ACLs bites, and as an add-on
> product we've had issues with kernel compatibility across upgrades.
> 
> I was hoping to replicate a similar feel by just having one large file
> server with all the data on it. If I split our user files across multiple
> servers, we would have to worry about which server contained what files,
> which would be rather annoying.
> 
> There are some features in NFSv4 that seem like they might someday help
> resolve this problem, but I don't think they are readily available in
> servers and definitely not in the common client.
> 
>>> I was planning to provide CIFS services via Samba. I noticed a posting a
>>> while back from a Sun engineer working on integrating NFSv4/ZFS ACL support
>>> into Samba, but I'm not sure if that was ever completed and shipped either
>>> in the Sun version or pending inclusion in the official version, does
>>> anyone happen to have an update on that? Also, I saw a patch proposing a
>>> different implementation of shadow copies that better supported ZFS
>>> snapshots, any thoughts on that would also be appreciated.
>> 
>> This work is done and, AFAIK, has been integrated into S10 8/07.
> 
> Excellent. I did a little further research myself on the Samba mailing
> lists, and it looks like ZFS ACL support was merged into the official
> 3.0.26 release. Unfortunately, the patch to improve shadow copy performance
> on top of ZFS still appears to be under discussion on the technical
> mailing list.
> 
>>> Is there any facility for managing ZFS remotely? We have a central identity
>>> management system that automatically provisions resources as necessary for
> [...]
>> This is a loaded question.  There is a webconsole interface to ZFS which can
>> be run from most browsers.  But I think you'll find that the CLI is easier
>> for remote management.
> 
> Perhaps I should have been more clear -- a remote facility available via
> programmatic access, not manual user direct access. If I wanted to do
> something myself, I would absolutely login to the system and use the CLI.
> However, the question was regarding an automated process. For example, our
> Perl-based identity management system might create a user in the middle of
> the night when that user's identity appears in our authoritative database,
> and would then need to create a ZFS filesystem and quota for that user.
> So, I need to be able to manipulate ZFS remotely via a programmatic
> API.
>
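
There's no remote API per se that I know of, but since everything is driven
by the zfs/zpool CLI, the usual approach is to have the provisioning host
run those commands on the file server over ssh. Your system is Perl, but the
same shell-out works from anything; here's a rough Python-flavored sketch,
with "zfshost", "tank/user" and the 5G quota all made up for illustration:

    #!/usr/bin/env python
    # Sketch: the identity management host never touches ZFS directly, it
    # just runs the ZFS CLI on the file server over key-based ssh.
    # "zfshost" and "tank/user" are placeholder names, not real ones.
    import subprocess

    def provision_user(username, host="zfshost",
                       parent="tank/user", quota="5G"):
        dataset = "%s/%s" % (parent, username)
        remote_cmd = "zfs create -o quota=%s %s" % (quota, dataset)
        subprocess.check_call(["ssh", host, remote_cmd])

    provision_user("newstudent")   # e.g. from the nightly provisioning run
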
>> Active/passive only.  ZFS is not supported over pxfs and ZFS cannot be
>> mounted simultaneously from two different nodes.
> 
> That's what I thought, I'll have to get back to that SE. Makes me wonder as
> to the reliability of his other answers :).
> 
>> For most large file servers, people will split the file systems across
>> servers such that under normal circumstances, both nodes are providing
>> file service.  This implies two or more storage pools.
> 
> Again though, that would imply two different storage locations visible to
> the clients? I'd really rather avoid that. For example, with our current
> Samba implementation, a user can just connect to
> '\\files.csupomona.edu\<username>' to access their home directory or
> '\\files.csupomona.edu\<groupname>' to access a shared group directory.
> They don't need to worry about which physical server it resides on or
> determine which server name to connect to.
> 
>> The SE is mistaken.  Sun^H^Holaris Cluster supports a wide variety of
>> JBOD and RAID array solutions.  For ZFS, I recommend a configuration
>> which allows ZFS to repair corrupted data.
> 
> That would also be my preference, but if I were forced to use hardware
> RAID, the additional loss of storage for ZFS redundancy would be painful.
> 
> Would anyone happen to have any good recommendations for an enterprise
> scale storage subsystem suitable for ZFS deployment? If I recall correctly,
> the SE we spoke with recommended the StorageTek 6140 in a hardware RAID
> configuration, and evidently mistakenly claimed that Cluster would not work
> with JBOD.

I really have to disagree; we have 6120s and 6130s, and if I had the option
to actually plan out some storage I would have just bought a Thumper.  You
could probably buy two for the cost of that 6140.

> 
> Thanks...
> 

-Andy Lubel
-- 


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
