Thanks☺

We are using Hammer 0.94.5. Which commit is supposed to fix this bug? Thank you.

From: David Turner [mailto:drakonst...@gmail.com]
Sent: April 25, 2017 20:17
To: 许雪寒; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Large META directory within each OSD's directory


Which version of Ceph are you running? My guess is Hammer pre-0.94.9. There is 
an osdmap cache bug that was introduced with Hammer that was fixed in 0.94.9. 
The workaround is to restart all of the OSDs in your cluster. After restarting 
the OSDs, the cluster will start to clean up osdmaps 20 at a time each time you 
generate a new map. If you don't generate maps often, you can write a loop that 
repeatedly sets a pool's min_size to its current value every 10-20 seconds until 
you catch up. (Note that this doesn't change any settings, but it does generate 
a new osdmap each time.)
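A loop along those lines might look like the sketch below. The pool name "rbd" and the min_size value 1 are assumptions; substitute your own pool and its current min_size so nothing actually changes. It's written as a dry run that prints the commands; drop the echo to execute them.

```shell
# Dry-run sketch: print the commands that would force new osdmap epochs.
# Assumes a pool named "rbd" whose min_size is already 1 - re-setting a
# pool option to its current value is a no-op for the pool, but each call
# still generates a new osdmap, which lets the OSDs trim old maps.
for i in 1 2 3; do
    echo ceph osd pool set rbd min_size 1
    # sleep 15   # in a real run, pause between map updates
done
```

Run it until `du -sh` on the OSDs' meta directories stops shrinking, then stop the loop.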

On Tue, Apr 25, 2017, 4:45 AM 许雪寒 <xuxue...@360.cn> wrote:
Hi, everyone.

Recently, in one of our clusters, we found that the “META” directory in each 
OSD’s working directory is getting extremely large, about 17GB each. Why hasn’t 
the OSD cleared those old osdmaps? How should I deal with this problem?

Thank you☺
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
