Hi,
You have two options for using this chassis:
* add a motherboard that can hold redundant power supplies; this way it is
just a 4U server with several disks
* use a server with an LSI card (or another HBA) and connect that HBA to the
chassis with a SAS cable (a quick sanity check for this is sketched below)
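For the second option, once the HBA is cabled to the chassis the JBOD disks
should show up like any other local targets. A quick sanity check on a
Solaris/OpenSolaris host (nothing here is specific to this chassis, and the
device names are only placeholders) would be something like:

  # cfgadm -al   # list attachment points; the SAS HBA and its attached disks should appear
  # format       # the JBOD drives should be listed as ordinary c#t#d# devices

If the disks show up there, they can be pooled exactly like internal drives.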
On Wed, Nov 18, 2009 at 3:24 AM, Bruno Sousa wrote:
> Hi Ian,
>
> I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit that
> has:
>
> Power Control Card
>
> SAS 846EL2/EL1 BP External Cascading Cable
>
> SAS 846EL1 BP 1-Port Internal Cascading Cable
>
> I don't do any monitoring in the JBOD chassis.
Chris Du wrote:
> You can get the E2 version of the chassis that supports multipathing,
> but you have to use dual-port SAS disks. Or you can use separate SAS
> HBAs to connect to separate JBOD chassis and mirror over the two chassis.
> The backplane is just a pass-through fabric which is very unlikely to die.
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi Bruno,
Bruno Sousa wrote:
> Hi Ian,
> I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit
> that has:
> * Power Control Card
Sorry to keep bugging you, but which card is this? I like the sound of
your setup.
Cheers,
Ian.
> * SAS 846EL2/EL1 BP External Cascading Cable
On Tuesday 17 November 2009 22:50, Ian Allison wrote:
> I'm learning as I go here, but as far as I've been able to determine,
> the basic choices for attaching drives seem to be
>
> 1) SATA Port multipliers
> 2) SAS Multilane Enclosures
> 3) SAS Expanders
What about PCI(-X) cards?
As stated in:
ht
You can get the E2 version of the chassis that supports multipathing, but you
have to use dual-port SAS disks. Or you can use separate SAS HBAs to connect to
separate JBOD chassis and mirror over the two chassis. The backplane is just a
pass-through fabric which is very unlikely to die.
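To make the mirror-over-two-chassis idea concrete, each top-level mirror vdev
would take one disk from each enclosure, so a whole chassis (or the HBA
feeding it) can drop without taking the pool down. A rough sketch with made-up
device names, where the c2 disks sit in the first JBOD and the c3 disks in the
second:

  # zpool create tank \
      mirror c2t0d0 c3t0d0 \
      mirror c2t1d0 c3t1d0 \
      mirror c2t2d0 c3t2d0
  # zpool status tank   # each mirror should list one disk from each chassis

For the E2/dual-port route, enabling MPxIO (stmsboot -e, then a reboot) lets
the OS use both paths to each disk.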
Hi Ian,
I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit
that has:
* Power Control Card
* SAS 846EL2/EL1 BP External Cascading Cable
* SAS 846EL1 BP 1-Port Internal Cascading Cable
I don't do any monitoring in the JBOD chassis.
Bruno
Ian Allison wrote:
> Hi,
Hi Bruno,
Bruno Sousa wrote:
Hi,
I currently have a 1U server (Sun X2200) with 2 LSI HBAs, each attached to a
Supermicro JBOD chassis with 24 SATA 1 TB disks, and so far so good.
So I have 48 TB of raw capacity, with a mirror configuration for NFS
usage (Xen VMs), and I feel that for the price I paid I have a very nice
system.
Hi Richard,
Richard Elling wrote:
Cases like the Supermicro 846E1-R900B have 24 hot-swap bays accessible
via a single (4U) LSI SASX36 SAS expander chip, but I'm worried about
controller death and having the backplane as a single point of failure.
There will be dozens of single points of failure
Also if you are a startup, there are some ridiculously sweet deals on Sun
hardware through the Sun Startup Essentials program.
http://sun.com/startups
This way you do not need to worry about compatibility and you get all the
Enterprise RAS features at a pretty low price point.
-Angelo
On Nov
Hi,
I currently have a 1U server (Sun X2200) with 2 LSI HBAs, each attached to a
Supermicro JBOD chassis with 24 SATA 1 TB disks, and so far so good.
So I have 48 TB of raw capacity, with a mirror configuration for NFS
usage (Xen VMs), and I feel that for the price I paid I have a very nice
system.
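For what it's worth, exporting such a mirrored pool to the Xen hosts over NFS
is only a couple of commands; the pool and filesystem names below are just
examples:

  # zfs create tank/xen-vms
  # zfs set sharenfs=on tank/xen-vms     # or e.g. sharenfs=rw=@192.168.10.0/24 to limit clients
  # zfs set compression=on tank/xen-vms  # optional, often helps with VM images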
On Nov 17, 2009, at 12:50 PM, Ian Allison wrote:
Hi,
I know (from the zfs-discuss archives and other places [1,2,3,4])
that a lot of people are looking to use zfs as a storage server in
the 10-100TB range.
I'm in the same boat, but I've found that hardware choice is the
biggest issue.