Indeed, it seems old maps are trimmed in batches of osd_target_transaction_size (default 30) per new 
osdmap.
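
In case it's useful to anyone else hitting this, the commands below should show 
the trim batch size and the cached map range on a live OSD (osd.257 is just an 
example id from our cluster). The first prints the batch size, the second 
reports oldest_map/newest_map, and the third shows the current cluster epoch:

# ceph daemon osd.257 config get osd_target_transaction_size
# ceph daemon osd.257 status
# ceph osd stat
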
Thanks a lot for your help!

-- Dan


> On 14 Sep 2016, at 15:49, Steve Taylor <steve.tay...@storagecraft.com> wrote:
> 
> I think it's a maximum of 30 maps per osdmap update. So if you've got huge 
> caches like we had, then you might have to generate a lot of updates to get 
> things squared away. That's what I did, and it worked really well.
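> 
> If you need to generate that churn artificially, something like this should 
> work (just a sketch -- the iteration count is arbitrary, and toggling the 
> noout flag is only used here as a harmless way to bump the osdmap epoch):
> 
> for i in $(seq 1 100); do
>     ceph osd set noout
>     ceph osd unset noout
>     sleep 1
> done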
> 
> Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2799
> ________________________________________
> From: Dan Van Der Ster [daniel.vanders...@cern.ch]
> Sent: Wednesday, September 14, 2016 7:21 AM
> To: Steve Taylor
> Cc: ceph-us...@ceph.com
> Subject: Re: Cleanup old osdmaps after #13990 fix applied
> 
> Hi Steve,
> 
> Thanks, that sounds promising.
> Are only a limited number of maps trimmed for each new osdmap generated? If 
> so, I'll generate a bit of churn to get these cleaned up.
> 
> -- Dan
> 
> 
> > On 14 Sep 2016, at 15:08, Steve Taylor <steve.tay...@storagecraft.com> 
> > wrote:
> >
> > http://tracker.ceph.com/issues/13990 was created by a colleague of mine 
> > from an issue that was affecting us in production. When 0.94.8 was released 
> > with the fix, I immediately deployed a test cluster on 0.94.7, reproduced 
> > this issue, upgraded to 0.94.8, and tested the fix. It worked beautifully.
> >
> > I suspect the issue you're seeing is that the clean-up only occurs when new 
> > osdmaps are generated, so as long as nothing is changing you'll continue to 
> > see lots of stale maps cached. We delete RBD snapshots all the time in our 
> > production use case, which updates the osdmap, so I did that in my test 
> > cluster and watched the map cache on one of the OSDs. Sure enough, after a 
> > while the cache was pruned down to the expected size.
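> >
> > Something along these lines should work for watching it -- the OSD id and 
> > path here are just examples, adjust them for your cluster. The first command 
> > reports the oldest_map/newest_map range the OSD thinks it holds, and the 
> > second counts the osdmap files actually on disk; the two should converge as 
> > trimming proceeds:
> >
> > ceph daemon osd.0 status | grep -E 'oldest_map|newest_map'
> > find /var/lib/ceph/osd/ceph-0/current/meta -name 'osdmap*' | wc -l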
> >
> > Over time I imagine you'll see things settle, but it may take a while if 
> > you don't update the osdmap frequently.
> >
> > Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation
> > 380 Data Drive Suite 300 | Draper | Utah | 84020
> > Office: 801.871.2799
> > ________________________________________
> > From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Dan Van 
> > Der Ster [daniel.vanders...@cern.ch]
> > Sent: Wednesday, September 14, 2016 3:45 AM
> > To: ceph-us...@ceph.com
> > Subject: [ceph-users] Cleanup old osdmaps after #13990 fix applied
> >
> > Hi,
> >
> > We've just upgraded to 0.94.9, so I believe this issue is fixed:
> >
> >    http://tracker.ceph.com/issues/13990
> >
> > AFAICT "resolved" means the number of osdmaps saved on each OSD will no 
> > longer grow without bound.
> >
> > However, we have many OSDs with loads of old osdmaps, e.g.:
> >
> > # pwd
> > /var/lib/ceph/osd/ceph-257/current/meta
> > # find . -name 'osdmap*' | wc -l
> > 112810
> >
> > (And our maps are ~1 MB each, so this is >100 GB per OSD.)
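> >
> > Something like this should give the actual on-disk total rather than an 
> > estimate (du's --files0-from option is GNU-specific):
> >
> > # find . -name 'osdmap*' -print0 | du -ch --files0-from=- | tail -n1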
> >
> > Is there a solution to remove these old maps?
> >
> > Cheers,
> > Dan

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
