Would the system be able to halt if something was unplugged or some other massive failure happened? That way, if something got tripped, I could fix it before any corruption or other issue occurred. That would be my safety net, I suppose.
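Something like this is what I'm picturing for that safety net - just a rough sketch run weekly from cron, with the pool name ("tank") and the mail address as placeholders:

    #!/bin/sh
    # If any pool is not healthy, mail me the details so I can step in
    # before any real corruption sets in.
    if ! zpool status -x | grep -q "all pools are healthy"; then
        zpool status | mailx -s "ZFS pool needs attention" admin@example.com
    fi
    # Periodic scrub to surface latent errors on the external disks early.
    zpool scrub tank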
On 3/20/07, Sanjeev Bagewadi <[EMAIL PROTECTED]> wrote:
Mike,

We have used 4 disks (2X80GB disks and 2X250GB disks) on USB and things worked well. Hot-plugging the disks was not all that smooth for us, but other than that we had no issues using the disks.

We used this setup for demos at the FOSS 2007 conference in Bangalore; it went through several destructive tests over a period of 3 days and the setup survived well. (It never let us down in front of the customers :-)

The disks we used had individual enclosures, which was a bit clunky. It would be nice to have a single enclosure for all the disks (one that can also power them).

Thanks and regards,
Sanjeev.

Bev Crair wrote:
> Mike,
> Take a look at
> http://video.google.com/videoplay?docid=8100808442979626078&q=CSI%3Amunich
>
> Granted, this was for demo purposes, but the team in Munich is clearly
> leveraging USB sticks for their purposes.
> HTH,
> Bev.
>
> mike wrote:
>
>> I still haven't got any "warm and fuzzy" responses yet solidifying ZFS
>> in combination with Firewire or USB enclosures.
>>
>> I am looking for 4-10 drive enclosures for quiet SOHO desktop-ish use.
>> I am trying to confirm that OpenSolaris+ZFS would be stable with this,
>> with the drives exported as JBOD and ZFS allowed to manage each disk
>> individually.
>>
>> Enclosure idea (choose one):
>> http://fwdepot.com/thestore/default.php/cPath/1_88
>> I would be looking to use 750GB SATA2 drives, though IDE is fine too.
>>
>> Would anyone be willing to speak up and give me some faith in this
>> before I invest money in a solution that won't work? I don't intend
>> to hot-plug any of these devices, just to use Firewire (or USB, if
>> I can find a big enough enclosure) since it is a cheap and reliable
>> interconnect. (eSATA seems to be a little too new for use with
>> OpenSolaris unless I have some PCI-X slots.)
>>
>> Any help is appreciated. I'd most likely use a Shuttle XPC as the
>> "head unit" for all of this - it is quiet and small. (I'm looking to
>> downsize my beefy, huge, noisy, heavy tower with limited space
>> availability.) Obviously, bus bandwidth would be more constrained the
>> more drives share the same cable; that would be my only design
>> constraint.
>>
>> Thanks a ton. Again, any input (good, bad, ugly, personal experiences
>> or opinions) is appreciated A LOT!
>>
>> - mike

--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel: x27521 +91 80 669 27521
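Coming back to my original plan quoted above, the pool layout I'm picturing looks roughly like this - only a sketch, with made-up device names (c2t0d0 and so on standing in for the USB/Firewire disks), each disk handed to ZFS whole so it manages the redundancy itself:

    # Build one raidz vdev out of the four (hypothetical) enclosure disks.
    zpool create tank raidz c2t0d0 c3t0d0 c4t0d0 c5t0d0
    # Confirm the layout and health of the new pool.
    zpool status tank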
_______________________________________________ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss