> Roch - PAE wrote:
> The hard part is getting a set of simple requirements. As you go into
> more complex data center environments you get hit with older Solaris
> revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
> most of us seem to be playing with ZFS is on the lower end of the
> complexity scale.
I've been watching this thread and unfortunately fit this model. I'd hoped that
ZFS might scale enough to solve my problem, but you seem to be saying that it's
mostly untested in large-scale environments.
About 7 years ago we ran out of inodes on our UFS file systems. We used bFile
as middleware for a while to distribute the files across multiple disks, and
then switched to VFS on SAN about 5 years ago. Distribution across file systems
and inode depletion continued to be a problem, so we switched middleware to
another vendor whose product essentially compresses about 200 files into a
single 10 MB archive and uses a DB to find each file within the correct archive
on the correct disk. Expensive, complex, and slow, but it was an effective
solution until the latest license renewal, when we got hit with a huge bill.
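For anyone curious, the read path of that archive-plus-catalog scheme amounts
to the sketch below (Python, with a hypothetical SQLite catalog; the table and
column names are illustrative stand-ins, not the vendor's actual schema, and
decompression of the stored range is omitted):

    import sqlite3

    def read_document(db_path, doc_id):
        # The catalog maps a document id to the archive that holds it
        # and the byte range inside that archive. Schema is hypothetical.
        conn = sqlite3.connect(db_path)
        row = conn.execute(
            "SELECT archive_path, offset, length FROM documents WHERE id = ?",
            (doc_id,),
        ).fetchone()
        conn.close()
        if row is None:
            raise KeyError(doc_id)
        archive_path, offset, length = row
        # Read the stored byte range directly instead of opening each
        # document as its own file; that is what keeps inode consumption
        # near 1 per 200 documents rather than 1 per document.
        with open(archive_path, "rb") as f:
            f.seek(offset)
            return f.read(length)

The win is purely on the metadata side; the cost is that every read now goes
through a database hop first.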
I'd love to go back to a pure file-system model and have looked at Reiser4,
JFS, NTFS, and now ZFS for a way to support over 100 million small documents
and 16 TB. We average 2 file reads and 1 file write per second, 24/7, with
expected growth to 24 TB. I'd be willing to scrap everything we have to find a
non-proprietary, long-term solution.
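For scale, the back-of-envelope arithmetic on those figures (my own derivation
from the numbers above, not measured values):

    # Rough sizing from the numbers above.
    files    = 100_000_000        # documents
    capacity = 16 * 2**40         # 16 TB in bytes
    avg_size = capacity / files   # ~176 KB per document
    print(avg_size / 1024)        # -> ~171.8 KiB average file size

    # 2 reads + 1 write per second, around the clock:
    ops_per_day = 3 * 86400
    print(ops_per_day)            # -> 259200 ops a day, trivial IOPS;
                                  # the open question is metadata scale
                                  # (100M+ objects) and scrub/backup
                                  # time at 16-24 TB, not throughput.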
ZFS looked like it might provide an answer. Are you saying it's not really
suitable for this type of application?