On Jan 26, 2011, at 19:48, Roy Sigurd Karlsbakk wrote:

> The scenario is thus: we have a 50TB storage unit which was built to be an 
> archive, but lately, scientists have been using it as a fileserver for 
> modelling. Practically, this means 50+ processes doing more or less random 
> i/o to the server, which a 4-VDEV RAIDz2 system isn't very well suited to 
> handle. [...]
> 
> Anyone here who knows such a system?

Isn't Lustre designed to handle these situations? You can have n back-end file 
servers across which I/O is distributed, but it all shows up in one 
namespace / mount point. You can then have multiple mount points with different 
I/O characteristics: /scratch is smaller, but on striped mirrors (one set of 
servers); /home or /data is larger, but on slower, more space-efficient RAID 
(possibly another set of servers, or a different FS on the same set as above).

        http://en.wikipedia.org/wiki/Lustre_(file_system)
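
A rough sketch of what that two-tier split could look like on the plain ZFS 
side (pool and device names below are made up, not anything from Roy's setup), 
whether or not Lustre ends up in front of it:

        # fast tier: striped mirrors for the random-i/o scratch space
        zpool create scratch mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

        # capacity tier: raidz2 for the bulk/archive data
        zpool create data raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

If it is Lustre, clients would then mount each filesystem by name, something 
like this (mgsnode is just a stand-in for whatever host runs the MGS):

        mount -t lustre mgsnode@tcp0:/scratch /scratch
        mount -t lustre mgsnode@tcp0:/data /data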

Sun/Oracle has its SAM-QFS combination if you want hierarchical storage 
management:

        http://en.wikipedia.org/wiki/QFS

This isn't wholly a technical problem either: the scientists need a place to 
put 'scratch' work for a few days (and need to be encouraged to use it), and 
only then move the data to the archive area. They don't seem to have the 
resources available (welcome to academia), and so are making do with what is 
actually there.