Following my previous post across several mailing lists regarding multi-terabyte
volumes with small files on them, I'd be glad if people could share real-life
numbers on large filesystems and their experience with them. I'm slowly coming
to the realization that, regardless of theoretical filesystem capabilities (1 TB,
32 TB, 256 TB or more), people more or less across the enterprise filesystem
arena recommend keeping practical filesystems to 1 TB or less, for
manageability and recoverability.

What's the maximum filesystem size you've used in a production environment,
and how did it work out?

I'm currently using 4 TB partitions with vxfs. When hosted on FreeBSD I was
limited to 2 TB, but using UFS2 on FreeBSD was impractical for several
reasons. With vxfs, 4 TB is a practical limit: while files are being stored
on the volume we take an incremental backup every night, and this requires
approx. 16-17 LTO-3 tapes. When a partition is filled up we perform a
complete backup, which requires approx. 12 LTO-3 tapes. Our tape library is
a Dell PV136T with 3x18 slots. Increasing a partition to 5 TB would require
more tapes, and I don't have any plans on becoming a tape DJ :-)
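
For what it's worth, the 12-tape figure lines up roughly with LTO-3's 400 GB
native capacity, assuming the full backup is close to 4 TB of data (the
per-tape fill below is only a back-of-the-envelope estimate):

    # 4 TB full backup vs. 400 GB native per LTO-3 tape
    echo "scale=1; 4096/400" | bc   # ~10.2 tapes at full native capacity
    echo "4096/12" | bc             # ~341 GB actually written per tape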

If I did use ZFS I would probably make the partitions the same size, but
still make the (z)pool rather large.
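
A minimal sketch of what I mean (the pool layout, disk names and the 4 TB
quota are just placeholders matching our current partition size, not a
recommendation):

    # one large pool, carved into 4 TB "partitions" via per-filesystem quotas
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    zfs create tank/data01
    zfs set quota=4T tank/data01
    zfs create tank/data02
    zfs set quota=4T tank/data02

Each filesystem then gets its own mountpoint and can be backed up separately,
while free space is shared across the whole pool.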

regards
Claus