> 
> Hi,
> 
> Yeah, sequentially, and I waited for each one to finish. It looks like it is still doing 
> something in the background, because it is now at 9.5GB even though it reports the 
> compaction as done.
> I think the ceph tell compact initiated a harder compaction, so I'm not sure how far it 
> will go down, but it looks promising. When I sent the email it was 13GB; now it is 9.5GB.

Online compaction isn’t as fast as offline compaction.  If you set 
mon_compact_on_start = true in ceph.conf, the mons will compact more efficiently 
before joining the quorum.  This of course means that they’ll take longer to 
start up and become active.  Arguably this should be the default.
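
Roughly, that looks like the following (a sketch only, assuming a systemd 
deployment; the [mon] section placement and the restart command are 
illustrative, so adjust to your environment):

    # /etc/ceph/ceph.conf on each mon host
    [mon]
        mon compact on start = true

    # restart the mons one at a time, waiting for quorum to return each time
    systemctl restart ceph-mon@$(hostname -s)

You’ll probably want to unset it again once the DB is back to a reasonable 
size, since it slows down every subsequent mon restart.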

> 1 OSD has been down for a long time, but that is the one I want to remove from the 
> cluster soon; all PGs are active+clean.

There’s an issue with at least some versions of Luminous where having down/out 
OSDs confounds compaction.  If you don’t end up with the mon DB size you expect 
soon, try removing or replacing that OSD; I’ll bet you’ll see better results.
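
In case it’s useful, a rough sketch of the usual removal sequence on Luminous 
(the OSD id 12 below is just a placeholder, substitute your own, and check that 
PGs stay active+clean between steps):

    ceph osd out 12                 # mark it out (no-op if it already is)
    ceph osd crush remove osd.12    # remove it from the CRUSH map
    ceph auth del osd.12            # delete its cephx key
    ceph osd rm 12                  # remove it from the OSD map

    # on Luminous releases that have it, the last three steps can be
    # collapsed into:
    # ceph osd purge 12 --yes-i-really-mean-it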

— aad

> 
> mon stat is the same, yes.
> 
> Now that I've finished this email it is down to 8.7GB.
> 
> I hope I didn't break anything and that it isn't going to delete everything.
> 
> Thank you
> ________________________________________
> From: Anthony D'Atri <anthony.da...@gmail.com>
> Sent: Tuesday, October 20, 2020 9:13 AM
> To: ceph-users@ceph.io
> Cc: Szabo, Istvan (Agoda)
> Subject: Re: [ceph-users] Mon DB compaction MON_DISK_BIG
> 
> I hope you restarted those mons sequentially, waiting between each for the 
> quorum to return.
> 
> Is there any recovery or pg autoscaling going on?
> 
> Are all OSDs up/in, i.e., are the three numbers returned by `ceph osd stat` the 
> same?
> 
> — aad
> 
>> On Oct 19, 2020, at 7:05 PM, Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> 
>> wrote:
>> 
>> Hi,
>> 
>> 
>> I received a warning this morning:
>> 
>> 
>> HEALTH_WARN mons monserver-2c01,monserver-2c02,monserver-2c03 are using a 
>> lot of disk space
>> MON_DISK_BIG mons monserver-2c01,monserver-2c02,monserver-2c03 are using a 
>> lot of disk space
>>   mon.monserver-2c01 is 15.3GiB >= mon_data_size_warn (15GiB)
>>   mon.monserver-2c02 is 15.3GiB >= mon_data_size_warn (15GiB)
>>   mon.monserver-2c03 is 15.3GiB >= mon_data_size_warn (15GiB)
>> 
>> It hit the 15GB threshold, so I restarted all 3 mons, which triggered compaction.
>> 
>> I've also run this command:
>> 
>> ceph tell mon.`hostname -s` compact on the first node, but it went down 
>> only to 13GB.
>> 
>> 
>> du -sch /var/lib/ceph/mon/ceph-monserver-2c01/store.db/
>> 13G     /var/lib/ceph/mon/ceph-monserver-2c01/store.db/
>> 13G     total
>> 
>> 
>> Anything else I can do to reduce it?
>> 
>> 
>> Luminous 12.2.8 is the version.
>> 
>> 
>> Thank you in advance.
>> 
>> 
>> ________________________________
>> This message is confidential and is for the sole use of the intended 
>> recipient(s). It may also be privileged or otherwise protected by copyright 
>> or other legal rules. If you have received it by mistake please let us know 
>> by reply email and delete it from your system. It is prohibited to copy this 
>> message or disclose its content to anyone. Any confidentiality or privilege 
>> is not waived or lost by any mistaken delivery or unauthorized disclosure of 
>> the message. All messages sent to and from Agoda may be monitored to ensure 
>> compliance with company policies, to protect the company's interests and to 
>> remove potential malware. Electronic messages may be intercepted, amended, 
>> lost or deleted, or contain viruses.
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
