Anders,

Have you considered something like the following:
http://www.newegg.com/Product/Product.asp?Item=N82E16816133001

I realize you're having trouble fitting more HDDs internally; this
should solve that problem.  Running iSCSI volumes is going to get real
ugly in a big hurry, and I strongly suggest you do NOT go that route.

Your best bet (to do things on the cheap) would be to have two servers:
one directly connected to the storage, the other with the eSATA cards
installed and waiting.  Assuming you can deal with *some* downtime, you
simply move the cables from one head to the other, import your pool,
and continue along.
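On ZFS that manual failover is just an export on the old head and an import on the new one.  A rough sketch, assuming a hypothetical pool named "tank":

```shell
# On the old head (if it's still up), cleanly detach the pool:
zpool export tank

# ...physically move the eSATA cables to the standby head...

# On the new head, scan the newly attached devices and import the pool:
zpool import tank

# If the old head died and never exported cleanly, force the import:
zpool import -f tank
```

The forced import is the one to be careful with: only use -f once you're sure the other head is really down, or you risk two hosts writing to the same pool.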

This should provide more than enough storage for a while.  It's 5.5 TB
per array with 500GB disks, and 6 arrays per server.  Technically you
could squeeze more arrays per server as well, as I believe you can find
motherboards with more than 6 PCI slots, and I'm pretty sure they also
make 8-port eSATA/SAS cards.
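For what it's worth, here's how the capacity math works out.  This is a sketch assuming 12-disk arrays with one disk's worth of parity (e.g. single-parity RAID-Z), which is the only way I see to get 5.5 TB usable out of 500GB disks -- that assumption isn't stated above, so adjust for the actual enclosure:

```python
# Capacity sketch: assumed 12-bay array, 500GB disks, one parity disk.
disks_per_array = 12
disk_gb = 500
parity_disks = 1

usable_per_array_gb = (disks_per_array - parity_disks) * disk_gb
print(usable_per_array_gb)  # 5500 GB, i.e. the 5.5 TB per array quoted above

arrays_per_server = 6
print(arrays_per_server * usable_per_array_gb / 1000)  # 33.0 TB per server
```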

Finally, if you need *real-time* failover, you could split the arrays:
take two ports to one server, two to the other, and run Sun Cluster.
When one server goes down, the other should take over almost instantly.
This is obviously going to cut your storage in half, but if you need
real-time failover you're going to have to take a hit somewhere.

This is actually the route I plan on taking eventually.  Anyone else
want to comment on the feasibility of it?

As for cost, I would think that if you eBay all of your old hardware
and wait for some sales on 500GB HDDs, it should more than get you
started on this.

--Tim

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss