Paul B. Henson wrote:
> We have been evaluating ZFS as a potential solution for delivering
> enterprise file services for our campus. I've posted a couple of times with
> various questions, but to recap we want to provide file space to our
> approximately 22000 students and 2400 faculty/staff, as well as group
> project space for about 1000 groups. Access will be via secure NFSv4 for
> our UNIX systems, and CIFS via samba for our windows/macosx clients (the
> in-kernel SMB server is not currently an option as we require official
> support).
>   

N.B. anyone can purchase a Production Subscription for OpenSolaris,
which would get you both official "support" and the in-kernel CIFS server.
http://www.sun.com/service/opensolaris/index.jsp

<sidebar>
At USC, we have a deal with Google whereby we use Google apps
and gmail, so if you send e-mail to me @usc.edu, then I get it as a
gmail service.  The interesting bit is that it uses USC's single sign-on
infrastructure, not Google's.
</sidebar>

> We have almost completed a functional prototype (we're just waiting for an
> IDR for ACL inheritance so we can complete testing), and are currently
> considering deploying x4500 servers. We're thinking about 5, with
> approximately 6000 ZFS filesystems each (Solaris 10U5 still has scalability
> issues, any more than about 5-6 thousand filesystems results in
> unacceptably long boot cycles).
>
> I was thinking about allocating 2 drives for the OS (SVM mirroring, pending
> ZFS boot support), two hot spares, and allocating the other 44 drives as
> mirror pairs into a single pool. While this will result in lower available
> space than raidz, my understanding is that it should provide much better
> performance. Is there anything potentially problematic about this
> configuration? Low-level disk performance analysis is not really my field,
> I tend to live a bit higher up in the abstraction layer. I don't think
> there would be any performance issues with this, but would much appreciate
> commentary from the experts.
>   

That is what I would do.
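
For reference, a minimal sketch of that layout (the controller/target
names below are placeholders rather than actual x4500 device paths, and
"tank" is just an example pool name):

    # 22 mirror pairs from the 44 data disks, plus the two hot spares;
    # repeat the "mirror" pairs until all 44 data disks are listed
    zpool create tank \
        mirror c0t0d0 c1t0d0 \
        mirror c0t1d0 c1t1d0 \
        mirror c0t2d0 c1t2d0 \
        spare c6t6d0 c6t7d0

Mirrored pairs generally give much better small random read performance
than raidz, since a read only has to touch one disk per pair, which is
why I'd make the same trade of space for performance here.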

> Has there been a final resolution on the x4500 I/O hanging issue? I think I
> saw a thread the other day about an IDR that seems promising to fix it, if
> we go this route hopefully that will be resolved before we go production.
>
> It seems like kind of a waste to allocate 1TB to the operating system,
> would there be any issue in taking a slice of those boot disks and creating
> a zfs mirror with them to add to the pool?
>   

This is also what I would do.
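
Roughly, assuming you leave a slice on each boot disk after the SVM
root mirror is set up (s7 and the device names here are just examples):

    # add a mirrored vdev built from the leftover slice on each boot
    # disk; slice names/sizes are assumptions, partition them first
    zpool add tank mirror c5t0d0s7 c5t4d0s7

The usual caveat is that ZFS only enables the disk write cache
automatically when it is given whole disks, so a slice-based vdev may
perform a bit differently than the rest of the pool.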

> I'm planning on using snapshots for online backups, maintaining perhaps 10
> days worth. At 6000 filesystems, that would be 60000 snapshots floating
> around, any potential scalability or performance issues with that?
>   

I don't think we have much data for this size of a production system.
OTOH, I would expect that only a small subset of the space will be
active.
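
The rotation itself is easy to script.  A minimal sketch of a nightly
cron job keeping roughly 10 days per filesystem (the pool name "tank"
and the daily-YYYY-MM-DD naming scheme are assumptions, not anything
you've committed to):

    #!/bin/sh
    today=`date +%Y-%m-%d`
    for fs in `zfs list -H -o name -r tank`; do
        zfs snapshot "$fs@daily-$today"
        # list this filesystem's daily snapshots, oldest first (the
        # date in the name makes lexical order chronological), and
        # destroy everything beyond the newest 10
        snaps=`zfs list -H -o name -t snapshot -r "$fs" | grep "^$fs@daily-" | sort`
        count=`echo "$snaps" | wc -l`
        excess=`expr $count - 10`
        if [ "$excess" -gt 0 ]; then
            echo "$snaps" | head -$excess | while read snap; do
                zfs destroy "$snap"
            done
        fi
    done

Creating and destroying the snapshots should be quick; the open
question at your scale is more how long anything that enumerates all
60000 of them will take.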

> Any other suggestions or pointing out of potential problems would be
> greatly appreciated. So far, ZFS looks like the best available solution
> (even better if S10U6 comes out before we go production :) ), thanks to all
> of the Sun guys for their great work on that...
>
>   

Long-term backup is more difficult.  Is there an SLA, or do you need to
treat faculty/staff differently from undergrads or grad students?
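If you end up rolling your own, the usual building block is piping zfs
send of the rotating snapshots to another box or into your backup
system; a hypothetical incremental (hostnames, dataset and snapshot
names are placeholders) would look like:

    zfs send -i tank/user/jdoe@daily-2008-05-01 \
        tank/user/jdoe@daily-2008-05-02 | \
        ssh backuphost "zfs receive -d backuppool"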
 -- richard
