For the record, ``rados df'' will give an object count. Would you mind
sending out your ceph.conf? I cannot imagine what parameter could raise
memory consumption so dramatically, so a glance at the config may reveal
some detail. A core dump would also be extremely useful (though it's
better to pass that along to Inktank directly).
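
For example, something along these lines should do the trick (the pool
name 'rbd' below is only a placeholder - substitute your own pools):

    rados df                   # per-pool object counts plus cluster totals
    rados -p rbd ls | wc -l    # object count for a single pool

The 'objects' column of ``rados df'' should be the number you're after.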

On Mon, Apr 28, 2014 at 1:14 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
> I don't know how to count objects, but it's a test cluster;
> I have no more than 50 small files.
>
> 2014-04-27 22:33 GMT+02:00 Andrey Korolyov <and...@xdel.ru>:
>> What # of objects do you have? After all, such a large footprint could
>> just be a bug in your build if you do not have an extremely high object
>> count (>~1e8) or some extraordinary configuration parameter.
>>
>> On Mon, Apr 28, 2014 at 12:26 AM, Gandalf Corvotempesta
>> <gandalf.corvotempe...@gmail.com> wrote:
>>> So, are you suggesting lowering the PG count?
>>> Actually I'm using the suggested number, OSDs * 100 / replicas,
>>> and I have just 2 OSDs per server.
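
Just to sanity-check that formula with concrete numbers - the server and
replica counts below are only assumptions, since you did not mention them:

    # e.g. 3 servers x 2 OSDs = 6 OSDs, replica count 3
    total PGs ~= OSDs * 100 / replicas = 6 * 100 / 3 = 200

which would normally be rounded up to a nearby power of two, e.g. 256.
Nothing extraordinary by itself.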
>>>
>>>
>>> 2014-04-24 19:34 GMT+02:00 Andrey Korolyov <and...@xdel.ru>:
>>>> On 04/24/2014 08:14 PM, Gandalf Corvotempesta wrote:
>>>>> During a recovery, I'm hitting the oom-killer for ceph-osd because it's
>>>>> using more than 90% of available RAM (8GB).
>>>>>
>>>>> How can I decrease the memory footprint during a recovery?
>>>>
>>>> You can reduce the PG count per OSD, for example; it scales down well enough.
>>>> The OSD memory footprint (during recovery or normal operations) depends on
>>>> the number of objects, i.e. the amount of committed data, and on the total
>>>> count of PGs per OSD. Since deleting data is not an option, I can suggest
>>>> only the one remaining way :)
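
A quick way to see where you stand now (the pool name 'rbd' is only an
example - check each of your pools):

    ceph osd pool get rbd pg_num    # PG count of that pool
    ceph osd pool get rbd size      # replica count of that pool
    # PGs hosted per OSD ~= sum over pools of (pg_num * size) / number of OSDs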
>>>>
>>>> I raised a related question a while ago about post-recovery memory
>>>> footprint patterns - the OSD shrinks its memory usage after a successful
>>>> recovery over a relatively long period, up to several days, in a couple
>>>> of fast 'leaps'. The heap has nothing to do with this bug, though I have
>>>> not profiled the daemon itself yet.
>>>>