If I understand the idea correctly, when that single SSD fails, the whole
node holding it will fail with it. Correct?
What is the recovery scenario for a node in this case?
Playing with "ceph-osd --flush-journal" and "ceph-osd --mkjournal" for each
OSD?
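
Something along these lines, I assume, for a planned swap while the journal
SSD is still readable (the OSD id N, the init syntax and the paths here are
my guesses and will vary by setup):

  ceph osd set noout                 # keep OSDs from being marked out and rebalanced while down
  service ceph stop osd.N            # stop the OSD (init syntax varies by distro)
  ceph-osd -i N --flush-journal      # replay pending journal entries into the filestore
  # repoint "osd journal" in ceph.conf (or the journal symlink) at the new partition
  ceph-osd -i N --mkjournal          # initialize the journal on the new device
  service ceph start osd.N
  ceph osd unset noout

And if the SSD dies uncleanly, --flush-journal is no longer possible, so I
suppose the affected OSDs would have to be recreated and backfilled from
replicas?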


On Tue, May 7, 2013 at 4:17 PM, Mark Nelson <mark.nel...@inktank.com> wrote:

> On 05/07/2013 06:50 AM, Barry O'Rourke wrote:
>
>> Hi,
>>
>> I'm looking to purchase a production cluster of 3 Dell Poweredge R515's
>> which I intend to run in 3 x replication. I've opted for the following
>> configuration;
>>
>> 2 x 6 core processors
>> 32GB RAM
>> H700 controller (1GB cache)
>> 2 x SAS OS disks (in RAID1)
>> 2 x 1Gb ethernet (bonded for cluster network)
>> 2 x 1Gb ethernet (bonded for client network)
>>
>> and either 4 x 2TB nearline SAS OSDs or 8 x 1TB nearline SAS OSDs.
>>
>
> Hi Barry,
>
> With so few disks and the inability to do 10GbE, you may want to consider
> doing something like 5-6 R410s or R415s and just using the on-board
> controller with a couple of SATA disks and 1 SSD for the journal.  That
> should give you better aggregate performance since in your case you can't
> use 10GbE.  It will also spread your OSDs across more hosts for better
> redundancy and may not cost that much more per GB since you won't need to
> use the H700 card if you are using an SSD for journals.  It's not as dense
> as R515s or R720XDs can be when fully loaded, but for small clusters with
> few disks I think it's a good trade-off to get the added redundancy and
> avoid expander/controller complications.
>
>
>
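
As an aside, if I understand the docs correctly, the journal layout Mark
describes is just a per-OSD "osd journal" setting in ceph.conf; the hostname
and partition labels below are made up for illustration:

  [osd.0]
      host = node-a
      osd journal = /dev/disk/by-partlabel/journal-0   # partition on the shared SSD

  [osd.1]
      host = node-a
      osd journal = /dev/disk/by-partlabel/journal-1   # second partition, same SSD
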
>> At the moment I'm undecided on the OSDs, although I'm swaying towards
>> the second option as it would give me more flexibility and
>> the option of using some of the disks as journals.
>>
>> I'm intending to use this cluster to host the images for ~100 virtual
>> machines, which will run on different hardware and will most likely be
>> managed by OpenNebula.
>>
>> I'd be interested to hear from anyone running a similar configuration
>> with a similar use case, especially people who have spent some time
>> benchmarking a similar configuration and still have a copy of the results.
>>
>> I'd also welcome any comments or critique on the above specification.
>> Purchases have to be made via Dell and 10Gb ethernet is out of the
>> question at the moment.
>>
>> Cheers,
>>
>> Barry
>>
>>
>>
>



-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
