It's not exactly a single system:

SSG-F618H-OSD288P
4U-FatTwin, 4x 1U 72TB per node, Ceph-OSD-Storage Node

This could actually be pretty good; it even has decent CPU power.

I'm not a big fan of blades and blade-like systems - sooner or later a 
backplane will die and you'll need to power off everything, which is a huge 
PITA.
But assuming you get 3 of these it could be pretty cool!
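
If you do go down that route I'd also tell CRUSH about the chassis, so a dead 
FatTwin backplane can't take all replicas of a PG with it. Very rough sketch 
(the bucket, node and pool names here are made up, double-check against your 
own CRUSH map before using any of it):

  # create a chassis bucket per FatTwin and move its four nodes into it
  ceph osd crush add-bucket fattwin1 chassis
  ceph osd crush move fattwin1 root=default
  ceph osd crush move node1 chassis=fattwin1
  # replicated rule that picks each copy from a different chassis, not host
  ceph osd crush rule create-simple replicated-chassis default chassis
  # then point the pools at it: ceph osd pool set <pool> crush_ruleset <n>
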
It would be interesting to see a price comparison to an SC216 chassis or 
similar; I'm afraid it won't be much cheaper.
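
And regarding the recovery impact Kris mentions below: with nodes this dense 
you'd almost certainly want to throttle backfill, otherwise one dead node 
becomes a client-visible event no matter how fat the network is. Something 
along these lines in ceph.conf (just a sketch, the right values depend on 
your release and workload):

  [osd]
  osd max backfills = 1
  osd recovery max active = 1
  osd recovery op priority = 1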

Jan

> On 03 Sep 2015, at 16:09, Kris Gillespie <kgilles...@bol.com> wrote:
> 
> It's funny because in my mind such dense servers seem like a bad idea
> for exactly the reason you mention: what if one fails? Losing 400+ TB
> of storage is going to have quite some impact, 40G interfaces or not,
> and no matter what options you tweak.
> Sure, it'll be cost-effective per TB, but that isn't the only aspect to
> look at (for production use anyway).
> 
> But I'd also be curious about real world feedback.
> 
> Cheers
> 
> Kris
> 
> The 09/03/2015 16:01, Gurvinder Singh wrote:
>> Hi,
>> 
>> I am wondering if anybody in the community is running a Ceph cluster with
>> high-density machines, e.g. the Supermicro SYS-F618H-OSD288P (288 TB),
>> Supermicro SSG-6048R-OSD432 (432 TB) or some other high-density
>> machines. I am assuming that the installation will be of petabyte scale,
>> as you would want to have at least 3 of these boxes.
>> 
>> It would be good to hear about their experiences in terms of reliability
>> and performance (especially during node failures). As these machines have
>> a 40Gbit network connection it may be OK, but it would be great to hear
>> from real users, especially since these machines are mentioned in the
>> reference architecture published by Red Hat and Supermicro.
>> 
>> Thanks for your time.

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
