On 10/09/2018 09:35 AM, Aleksei Zakharov wrote:
> If someone is interested: we've found a workaround in this mailing list: 
> https://www.spinics.net/lists/ceph-users/msg47963.html
> It looks like an old bug.
> We fixed the issue by restarting all ceph-mon services one by one. The mons'
> store now uses ~500MB, and the OSDs removed the old osdmaps:
> ~# find /var/lib/ceph/osd/ceph-224/current/meta/ | wc -l
> 1839
> 
> New OSDs use only 1.35GiB after their first start with no weight.
> 
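
A minimal sketch of that rolling mon restart, assuming systemd-managed daemons
and the mon IDs 1-5 shown in the "ceph -s" output further down (adjust the IDs
and the wait to your cluster):

~# for id in 1 2 3 4 5; do
       systemctl restart ceph-mon@"$id"
       sleep 60   # give the mon time to rejoin quorum; verify with "ceph quorum_status"
   done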

Since your MONs are rather old, I think they are using LevelDB instead of
RocksDB.

It might be worth re-deploying the MONs one by one so that they use
RocksDB.
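
Roughly, re-deploying one mon could look like the sketch below. This is not a
full procedure: the mon ID 2, the placeholder address, and the default paths
are assumptions, and "mon keyvaluedb = rocksdb" must be set in the [mon]
section of ceph.conf before the new mon is created. Do one mon at a time and
wait for quorum to recover in between.

~# ceph mon remove 2
~# mv /var/lib/ceph/mon/ceph-2 /var/lib/ceph/mon/ceph-2.old
~# ceph auth get mon. -o /tmp/mon.keyring
~# ceph mon getmap -o /tmp/monmap
~# ceph-mon -i 2 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
~# chown -R ceph:ceph /var/lib/ceph/mon/ceph-2
~# ceph mon add 2 <mon-2-ip>:6789
~# systemctl start ceph-mon@2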

Wido

> 
> 08.10.2018, 22:31, "Aleksei Zakharov" <zakharov....@yandex.ru>:
>> As I can see, all PGs are active+clean:
>>
>> ~# ceph -s
>>   cluster:
>>     id: d168189f-6105-4223-b244-f59842404076
>>     health: HEALTH_WARN
>>             noout,nodeep-scrub flag(s) set
>>             mons 1,2,3,4,5 are using a lot of disk space
>>
>>   services:
>>     mon: 5 daemons, quorum 1,2,3,4,5
>>     mgr: api1(active), standbys: api2
>>     osd: 832 osds: 791 up, 790 in
>>          flags noout,nodeep-scrub
>>
>>   data:
>>     pools: 10 pools, 52336 pgs
>>     objects: 47.78M objects, 238TiB
>>     usage: 854TiB used, 1.28PiB / 2.12PiB avail
>>     pgs: 52336 active+clean
>>
>>   io:
>>     client: 929MiB/s rd, 1.16GiB/s wr, 31.85kop/s rd, 36.19kop/s wr
>>
>> 08.10.2018, 22:11, "Wido den Hollander" <w...@42on.com>:
>>>  On 10/08/2018 05:04 PM, Aleksei Zakharov wrote:
>>>>   Hi all,
>>>>
>>>>   We've upgraded our cluster from Jewel to Luminous and re-created the 
>>>> monitors using RocksDB.
>>>>   Now we see that the mons are using a lot of disk space and the used 
>>>> space only grows. It is about 17GB now; it was ~13GB when we used LevelDB 
>>>> and the Jewel release.
>>>>
>>>>   When we added new OSDs, we saw that they download a lot of data from 
>>>> the monitors. It was ~15GiB a few days ago and it is ~18GiB today.
>>>>   One of the OSDs we created uses FileStore, and it looks like old 
>>>> osdmaps are not removed:
>>>>
>>>>   ~# find /var/lib/ceph/osd/ceph-224/current/meta/ | wc -l
>>>>   73590
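
An OSD also reports the range of osdmap epochs it still holds, which can
confirm whether maps are being trimmed (a sketch; osd.224 is the OSD from the
listing above, and the command has to run on the host carrying that OSD):

~# ceph daemon osd.224 status | grep -E 'oldest_map|newest_map'

A very wide epoch range would match the large number of meta objects counted
above.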
>>>>
>>>>   I've tried to run manual compaction (ceph tell mon.NUM compact) but it 
>>>> doesn't help.
>>>>
>>>>   So, how can we stop this growth of data on the monitors?
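
For reference, compaction can be triggered per mon at runtime or on every mon
start, but it only reclaims space the mons have already trimmed, so it will
not shrink the store while trimming is blocked (mon ID 1 is just an example):

~# ceph tell mon.1 compact

# or, in the [mon] section of ceph.conf:
mon compact on start = true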
>>>
>>>  What is the status of Ceph? Can you post the output of:
>>>
>>>  $ ceph -s
>>>
>>>  MONs do not trim their database if one or more PGs aren't active+clean.
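
A quick way to look for PGs that would block trimming (the exact output
format varies by release):

~# ceph health detail
~# ceph pg dump_stuck unclean
~# ceph pg dump_stuck inactive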
>>>
>>>  Wido
>>>
>>>>   --
>>>>   Regards,
>>>>   Aleksei Zakharov
>>>>
>>
>> --
>> Regards,
>> Aleksei Zakharov
>>
> 
> -- 
> Regards,
> Aleksei Zakharov
> 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
