In the Eat-Your-Own-Dogfood mode:
Here in CSG at Sun (which is mainly Java-related things):
Steffen Weiberle wrote:
I am trying to compile some ZFS deployment scenarios.
If you are running ZFS in production, would you be willing to provide
the following (publicly or privately)?
# of systems
All our central file servers plus the various Source Code repositories
(of particular note: http://hg.openjdk.java.net which holds all of the
OpenJDK and related source code). Approximately 20 major machines, plus
several dozen smaller ones. And that's only what I know about (maybe
about 50% of the total organization).
amount of storage
45TB+ just on the fileservers and Source Code repos
application profile(s)
NFS servers, Mercurial source code repositories, Teamware source code
repositories, lightweight web databases (db, PostgreSQL, MySQL), web
TWikis, flat-file data profile storage, and centralized storage for
virtualized hosts. We're also just starting with roll-your-own VTLs.
type of workload (low, high; random, sequential; read-only,
read-write, write-only)
NFS servers: high load (100s of clients per server), random read &
write, mostly small files.
Hg & TW source code repos: low load (only on putbacks), large numbers of
small-file reads/writes (i.e. mostly random)
Testing apps: mostly mid-size sequential writes
VTL (disk backups): high load streaming writes almost exclusively.
xVM systems: moderate to high load, heavy random read, modest random write.
storage type(s)
Almost exclusively FC-attached SAN. A small number of dedicated FC
arrays (STK2540 / STK6140), and the odd iSCSI thing here and there. NFS
servers are pretty much all T2000. Source Code repos are X4000-series
Opteron systems (usually X4200, X4140, or X4240). Thumpers (X4500) are
scattered around, and the rest is a total mishmash of both Sun and others.
industry
Software development
whether it is private or I can share in a summary
I can't see any reason not to summarize.
anything else that might be of interest
Right now we're hardly using SSDs at all, and we unfortunately haven't
done much with the Amber Road storage devices (7000-series).
Our new interest is the Thumper/Thor (X4500 / X4540) machines being
used as disk backup devices: we're moving our backups to disk (i.e.
client backups go to disk first, then to tape as needed). This is made
possible by ZFS. We're replacing virtually all our VxFS systems with ZFS.
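To give a feel for the disk-staging piece (the pool layout and dataset
names below are hypothetical, a sketch of the general approach rather
than our actual configuration), the backup target is just a ZFS pool
built on the Thumper's disks with a compressed filesystem on top:

    # Hypothetical sketch: raidz2 pool across a set of the Thumper's disks,
    # with a compressed filesystem as the disk-backup staging area.
    zpool create backuppool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zfs create -o compression=on backuppool/staging
    # Client backups land in backuppool/staging first, then go to tape as needed.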
Also, the primary development build/test system depends heavily on ZFS
for storage, and will lean even more on it as we convert to xVM-based
virtualization. I plan on using snapshots to radically reduce the disk
space required by multiple identical clients, and to make adding and
retiring clients simpler. In the case of our Windows clients, I expect
ZFS snapshotting to let me automatically wipe the virtual client after
every test run, which is really nice considering how flaky testing on
Windows can be.
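As a rough sketch of what I mean (dataset names here are hypothetical,
and this is just one way of doing it with snapshots and the clones built
from them): each test client is a clone of a golden image, and the wipe
is a rollback to a pristine snapshot:

    # Hypothetical layout: one golden Windows image, cloned per test client.
    zfs snapshot vmpool/golden-windows@clean
    zfs clone vmpool/golden-windows@clean vmpool/client01
    # Mark the freshly cloned client as pristine...
    zfs snapshot vmpool/client01@pristine
    # ...then after every test run, wipe the client by rolling it back.
    zfs rollback vmpool/client01@pristine

Since the clones share blocks with the golden image, a dozen identical
clients cost barely more space than one.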
Thanks in advance!!
Steffen
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA