Christian Balzer wrote:
> Simply put, a RAID1 of SSDs will require you to get twice as many SSDs as
> otherwise needed. And most people don't want to spend that money.
> In addition to that, DC-level SSDs tend to be very reliable, and your
> cluster will have to be able to withstand losses like this anyway.
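To put rough numbers on that trade-off, here is a minimal sketch of what
losing a single non-mirrored journal SSD actually costs a cluster. All
figures (OSDs per journal SSD, disk size, fill level) are assumptions for
illustration, not numbers from this thread:

    # Assumed layout: 5 OSD journals per SSD, 4 TB data disks, 70% full.
    osds_per_journal_ssd = 5
    osd_size_tb = 4
    avg_fill = 0.7

    # A failed journal SSD takes all of its OSDs down at once, and Ceph
    # then re-replicates their data across the remaining OSDs.
    data_to_recover_tb = osds_per_journal_ssd * osd_size_tb * avg_fill
    print(f"data to re-replicate: {data_to_recover_tb:.1f} TB")

With assumptions like these, one SSD failure means re-replicating on the
order of 14 TB, which a reasonably sized cluster can absorb; that is why
people usually skip RAID1 journals rather than double the SSD bill.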
Chris Kitzmiller wrote:
> Just as a single data point, I can speak to my own nodes. I'm using SM 847A
> [1] chassis. They're 4U, with 36 x 3.5" hot-swap bays and 2 internal 2.5" bays.
> So:
>
> 30 x 7200 RPM SATA
> 6 x SSD Journals
> 2 x SSD OS / Mon
> 2 x E5-2620 2.0GHz
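Since the thread is ultimately about power budgeting, a back-of-envelope
estimate for a node like the one above may be useful. The per-component
wattages below are assumed typical datasheet-style figures, not measured
values:

    # Rough per-node power estimate; every wattage here is an assumption.
    components = {
        # name: (count, assumed watts each under load)
        "7200rpm SATA HDD": (30, 8.0),
        "journal SSD":      (6, 4.0),
        "OS/mon SSD":       (2, 3.0),
        "E5-2620 CPU":      (2, 95.0),  # TDP, i.e. worst case
    }
    base_watts = 100.0  # assumed motherboard, RAM, fans, HBAs

    total = base_watts + sum(n * w for n, w in components.values())
    print(f"rough load estimate: {total:.0f} W per node")

That lands around 560 W at the node; divide by your PSU efficiency (say
0.92 for a Platinum unit) to get wall-socket draw, and leave headroom for
drive spin-up peaks.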
Hi,
Christian Balzer wrote:
>> I'm not sure I understand correctly: the model I indicated in the link
>> above (page 2, model SSG-6027R-OSD040H in the table) already has hotswap
>> bays in the back for the OS drives.
>>
> Yes, but that model is pre-configured:
> 2x 2.5" 400GB SSDs, 10x 3.5" 4TB SATA HDDs.
On Wed, 08 Apr 2015 14:59:21 +0200 Francois Lafont wrote:
Hi,
Sorry in advance for this thread not directly linked to Ceph. ;)
We are thinking about buying servers to build a Ceph cluster and we
would like to have, if possible, an *approximate* power usage
estimate for these servers (this parameter could be important in
our choice):
1. the 12xbays su