[EMAIL PROTECTED] said:
> but Marion's is not really possible at all, and won't be for a while with
> other groups' choice of storage-consumer platform, so it'd have to be
> GlusterFS or some other goofy fringe FUSEy thing or not-very-general crude
> in-house hack.
Well, of course the magnitude of the fringe factor is in the eye of the beholder. I didn't intend to make pNFS seem like a done deal. I don't quite think of OpenSolaris as a "done deal" either -- we're still running Solaris 10 in production here -- but since this is an OpenSolaris mailing list I should be more careful.

Anyway, from looking over the wiki/blog info, the sticking point with pNFS appears to be client-side availability: there are only Linux and (Open)Solaris NFSv4.1 clients as yet. Still, pNFS claims to be backwards compatible with NFSv3 clients: if you point a traditional NFS client at the pNFS metadata server, the MDS is supposed to relay the data from the backend data servers.

[EMAIL PROTECTED] said:
> It's a shame that Lustre isn't available on Solaris yet either.

Actually, that may not be so terribly fringey either. Lustre and Sun's Scalable Storage product can make use of Thumpers:

  http://www.sun.com/software/products/lustre/
  http://www.sun.com/servers/cr/scalablestorage/

Apparently it's possible to have a Solaris/ZFS data server for Lustre backend storage:

  http://wiki.lustre.org/index.php?title=Lustre_OSS/MDS_with_ZFS_DMU

I see they do not yet have anything other than Linux clients, so that's a limitation. But you can share out a Lustre filesystem over NFS, potentially from multiple Lustre clients, and maybe via CIFS/Samba as well.

Lastly, I've considered the idea of using Shared QFS to glue together multiple Thumper-hosted iSCSI LUNs. You could add Shared QFS clients (acting as NFS/CIFS servers) if the client load needed more than one, and SAM-FS would then be a possibility for backup/replication.

Anyway, I do feel that none of this stuff is quite "there" yet. But my experience with ZFS on Fibre Channel SAN storage -- that sinking feeling I've had when a little connectivity glitch resulted in a ZFS panic -- makes me wonder whether non-redundant ZFS on an iSCSI SAN is "there" yet, either.
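On that last point, one knob worth knowing about: if I'm reading the docs right, recent OpenSolaris builds expose a pool-level failmode property that controls what ZFS does on catastrophic device failure, so a transient SAN glitch needn't panic the whole box. A rough sketch (the pool name "tank" is made up):

```shell
# Check the current failure policy on the pool:
zpool get failmode tank

# "wait" (the default) blocks I/O until the device comes back;
# "continue" returns EIO to new writes instead of panicking;
# "panic" is the old crash-the-box behavior.
zpool set failmode=continue tank
```

Whether "continue" is actually what you want depends on how the applications on top cope with EIO, but it at least makes the tradeoff configurable rather than fatal.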
So far none of our lost-connection incidents has resulted in pool corruption, but we have only 4TB or so. Restoring that much from tape is feasible, but even if Gray's 150TB of data can be recreated, it would take weeks to reload it.

If it's decided that the clustered-filesystem solutions aren't feasible yet, the suggestion I've seen that I liked best was Richard's: a bad-boy server SAS-connected to multiple J4500's. But since Gray's project already has the X4500's, I guess they'd have to find another use for them (:-).

Regards,
Marion

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss