Oh, the machines are 128 GB :)

From:  Corin Langosch <corin.lango...@netskin.com>
Date:  Friday, 30 August 2013 2:08 PM
To:  Wido den Hollander <w...@42on.com>
Cc:  <ceph-users@lists.ceph.com>
Subject:  Re: [ceph-users] OSD to OSD Communication

    
 
 On 30.08.2013 20:33, Wido den Hollander wrote:
 
 
> On 08/30/2013 08:19 PM, Geraint Jones wrote:
>  
>> Hi Guys 
>>  
>>  We are using Ceph in production backing an LXC cluster. The setup is: 2
>>  servers, each with 24 x 3 TB disks in groups of 3 as RAID0, SSDs for
>>  journals, and bonded 1 Gbit Ethernet (2 Gbit total).
>>  
>>  
>  
>  I think you sized your machines too big. I'd say go for 6 machines with 8
> disks each, without RAID-0. Let Ceph do its job and avoid RAID.
>  
>  
 
 I think that would be OK if Ceph didn't consume that much memory. On a
relatively small cluster with currently 16 OSDs and 8192 PGs (2 pools with
4096 PGs each), each OSD consumes around 2-4 GB of RAM. So with 24 disks, a
machine would need at least 96 GB of RAM just for the OSDs; with some memory
left over for the page cache etc., 128 GB would be the minimum.
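 
 For reference, here's the back-of-the-envelope sizing as a quick Python
sketch. The per-OSD figure is what I observe on our cluster, so treat the
numbers as assumptions rather than guarantees:

    # RAM sizing for a 24-disk box, using the 2-4 GB per OSD I see here.
    osds_per_host = 24
    ram_per_osd_gb = 4                      # worst case observed

    osd_ram_gb = osds_per_host * ram_per_osd_gb
    print("RAM for OSD daemons alone: %d GB" % osd_ram_gb)   # 96 GB
    print("With headroom for page cache etc.: >= 128 GB")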
 
 Anyway, I think this is a bug and have already filed a report for it here:
http://tracker.ceph.com/issues/5700. To be honest, I'm a little disappointed
by the last answer there, because is 8192 really a lot of PGs? The docs say
50-100 PGs per OSD per pool, so 2 pools with 4096 PGs each only allows for
40-80 OSDs, which isn't really that much. There's also a big discrepancy
with the docs, which state an OSD should consume 200-500 MB
(http://ceph.com/docs/next/install/hardware-recommendations/). I know David
is still waiting for a detailed debug log from me, which I'll provide
shortly. But if Ceph's memory requirements really are that high, fixing the
docs to allow for proper resource planning is a must.
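 
 To make the arithmetic explicit, the same calculation as a small Python
sketch. It follows the 50-100 PGs/OSD/pool guideline literally and ignores
replication, which would only push the per-OSD count higher:

    # Our cluster: 2 pools x 4096 PGs on 16 OSDs.
    pgs_per_pool = 4096
    osds = 16

    # 4096 / 16 = 256 PGs per OSD per pool, well above the 50-100 guideline.
    print("PGs per OSD per pool: %d" % (pgs_per_pool // osds))

    # Conversely, how many OSDs does the guideline expect for 4096 PGs/pool?
    # Prints 40 - 81 (the 40-80 range above, modulo rounding).
    print("OSD range: %d - %d" % (pgs_per_pool // 100, pgs_per_pool // 50))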
 
 I'm curious: what memory usage are other users experiencing?
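 
 If it helps comparison, here's roughly how I'm totalling it up: a minimal
sketch that sums VmRSS for ceph-osd processes from /proc. Linux only; the
paths and the process name are assumptions about a standard setup:

    import os

    total_kb = 0
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            # Only look at ceph-osd daemons.
            with open('/proc/%s/comm' % pid) as f:
                if f.read().strip() != 'ceph-osd':
                    continue
            # VmRSS is the resident set size in kB.
            with open('/proc/%s/status' % pid) as f:
                for line in f:
                    if line.startswith('VmRSS:'):
                        total_kb += int(line.split()[1])
        except IOError:
            pass  # process exited while we were scanning

    print("Total ceph-osd RSS: %.1f GB" % (total_kb / 1024.0 / 1024.0))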
 
 Cheers,
 Corin
 
 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
