Chris Du wrote:
> You can get the E2 version of the chassis that supports multipathing,
> but you have to use dual-port SAS disks. Or you can use separate SAS
> HBAs to connect to separate JBOD chassis and do a mirror over the 2 chassis.
> The backplane is just a pass-through fabric which is very unlikely to fail.
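As a minimal sketch of the second option (the cXtYdZ device names are
hypothetical and assume the two HBAs enumerate as controllers c1 and c2):

  # pair each disk in chassis 1 with its twin in chassis 2, so a dead
  # HBA, cable or backplane still leaves every vdev with one working side
  zpool create tank \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0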
Cascading cable:
* SAS 846EL1 BP 1-Port Internal Cascading Cable
I don't do any monitoring in the JBOD chassis.
Bruno
Ian Allison wrote:
Hi Bruno,
Bruno Sousa wrote:
Hi,
I currently have a 1U server (Sun X2200) with 2 LSI HBAs attached to
Supermicro JBOD chassis, each one with 24 disks, SATA 1 TB, and so far so
good.
So I have 48 TB of raw capacity, with a mirror configuration for NFS
usage (Xen VMs), and I feel that for the price…
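A sketch of the NFS side of such a setup, assuming a pool named tank and a
vms filesystem (both names hypothetical):

  # carve out a filesystem for the VM images and export it over NFS;
  # sharenfs is a native ZFS property, no dfstab editing needed
  zfs create tank/vms
  zfs set sharenfs=on tank/vms

The Xen hosts then mount server:/tank/vms like any other NFS export.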
Hi Richard,
Richard Elling wrote:
Cases like the Supermicro 846E1-R900B have 24 hot swap bays accessible
via a single (4u) LSI SASX36 SAS expander chip, but I'm worried about
controller death and having the backplane as a single point of failure.
There will be dozens of single points of failure…
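If you take the E2/dual-port SAS route Chris mentioned, Solaris MPxIO can
collapse the two paths per disk into one multipathed device. A sketch,
assuming the LSI HBAs use the mpt driver:

  # enable multipathing on mpt-attached ports (requires a reboot),
  # then confirm each LU shows two operational paths
  stmsboot -D mpt -e
  mpathadm list lu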
Hi,
I know (from the zfs-discuss archives and other places [1,2,3,4]) that a
lot of people are looking to use zfs as a storage server in the 10-100TB
range.
I'm in the same boat, but I've found that hardware choice is the biggest
issue. I'm struggling to find something which will work nicely…
Hi,
I've been looking at a raidz using OpenSolaris snv_111b and I've come
across something I don't quite understand. I have 5 disks (fixed-size
disk images defined in VirtualBox) in a raidz configuration, with 1 disk
marked as a spare. The disks are 100 MB in size and I wanted to simulate
data corruption…
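A sketch of that kind of test with file-backed vdevs standing in for the
VirtualBox images (the paths and the 4-disk-raidz-plus-spare split are my
assumptions):

  # five 100 MB backing files as vdevs
  cd /var/tmp && mkfile 100m d1 d2 d3 d4 d5
  zpool create testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 \
      /var/tmp/d4 spare /var/tmp/d5
  # scribble over part of one vdev, well clear of the ZFS labels at
  # either end of the device (conv=notrunc keeps the file size intact)
  dd if=/dev/urandom of=/var/tmp/d2 bs=1024k seek=8 count=16 conv=notrunc
  zpool scrub testpool
  zpool status -v testpool

Note that a scrub repairs checksum errors from parity; the spare typically
only attaches if FMA actually faults the device.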