Those FatTwins are not blades in the classical sense; they are what are often 
referred to as un-blades.

 

They only share power, i.e. about 4-6 pins connected by solid pieces of copper 
to the PSUs. I can’t see any way of this going wrong. If you take out all the 
sleds you are just left with an empty box. If you took the fans out of the 
back, you could probably even climb through it.

 

However, since they share power and cooling, they work out cheaper to buy and 
run than standard servers. As long as you don’t mind pulling the whole sled out 
to swap a disk, I think you would be hard pressed to find a solution that 
matches it in terms of price/density.

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jan 
Schermer
Sent: 03 September 2015 15:53
To: Paul Evans <p...@daystrom.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] high density machines

 

 

On 03 Sep 2015, at 16:49, Paul Evans <p...@daystrom.com 
<mailto:p...@daystrom.com> > wrote:

 

Echoing what Jan said, the 4U Fat Twin is the better choice of the two options, 
as it is very difficult to get long-term reliable and efficient operation of 
many OSDs when they are serviced by just one or two CPUs.  

I don’t believe the FatTwin design has much of a backplane, primarily sharing 
power and cooling. That said: the cost savings would need to be solid to choose 
the FatTwin over 1U boxes, especially as (personally) I dislike lots of 
front-side cabling in the rack.  

 

I never used SuperMicro blades, but with Dell blades there's a single 
"backplane" board into which the blades plug for power and IO distribution. We 
had it go bad in a way where the blades would work until removed, but wouldn't 
power on once plugged in again. A restart of the chassis didn't help and we had 
to replace the backplane.

I can't imagine SuperMicro would be much different; there are some components 
that just can't be replaced while the chassis is in operation.

 





-- 
Paul Evans




 

On Sep 3, 2015, at 7:01 AM, Gurvinder Singh <gurvindersinghdah...@gmail.com 
<mailto:gurvindersinghdah...@gmail.com> > wrote:

 

Hi,

I am wondering if anybody in the community is running ceph cluster with
high density machines e.g. Supermicro SYS-F618H-OSD288P (288 TB),
Supermicro SSG-6048R-OSD432 (432 TB) or some other high density
machines. I am assuming that the installation will be of petabyte scale
as you would want to have at least 3 of these boxes.

It would be good to hear their experiences in terms of reliability and
performance (especially during node failures). As these machines have
40Gbit network connections it can be OK, but experience from real users
would be great to hear, as these machines are mentioned in the reference
architecture published by Red Hat and Supermicro.

Thanks for your time.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com> 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 


 




