> On 14 Sep 2016, at 23:07, Gregory Farnum <gfar...@redhat.com> wrote:
> 
> On Wed, Sep 14, 2016 at 7:19 AM, Dan Van Der Ster
> <daniel.vanders...@cern.ch> wrote:
>> Indeed, seems to be trimmed by osd_target_transaction_size (default 30) per 
>> new osdmap.
>> Thanks a lot for your help!
> 
> IIRC we had an entire separate issue before adding that field, where
> cleaning up from bad situations like that would result in the OSD
> killing itself as removing 2k maps exceeded the heartbeat timeouts. ;)
> Thus the limit.

Thanks Greg. FTR, I did some experimenting and found that setting 
osd_target_transaction_size = 1000 is a very bad idea (tried on one osd... 
the FileStore merging of the meta subdirs led to a slow, then down, osd). 
But setting it to ~60 was OK.
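
In case it's useful to others, this is the kind of thing I mean -- a rough
sketch, not a recipe; osd.12 is just a placeholder id and the exact flag
syntax may differ per release:

   # test the runtime change on a single osd first
   ceph tell osd.12 injectargs '--osd_target_transaction_size 60'

   # and/or persist it in ceph.conf under [osd]:
   #   osd target transaction size = 60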

I cleaned up 90TB of old osdmaps today, generating new maps in a loop by doing:

   watch -n10 ceph osd pool set data min_size 2

Anything more aggressive than that was disruptive on our cluster.
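
To keep an eye on the trimming you can watch the oldest/newest map epochs
converge and the meta dir shrink -- again just a sketch, with osd.12 as a
placeholder and assuming the default filestore data path:

   ceph daemon osd.12 status | grep -E 'oldest_map|newest_map'
   du -sh /var/lib/ceph/osd/ceph-12/current/meta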

Cheers, Dan

