> On 23 May 2017 at 13:01, george.vasilaka...@stfc.ac.uk wrote:
> 
> 
> > Your RGW buckets: how many objects are in them, and do they have the
> > index sharded?
> 
> > I know we have some very large & old buckets (10M+ RGW objects in a
> > single bucket), with correspondingly large OMAPs wherever that bucket
> > index is living (sufficiently large that trying to list the entire thing
> > online is fruitless). Ceph's pgmap status says we have 2G RADOS objects,
> > however, and you're only at 61M RADOS objects.
> 
> 
> According to radosgw-admin bucket stats, the most populous bucket contains 
> 568101 objects. There is no index sharding. The default.rgw.buckets.data pool 
> contains 4162566 objects; I think striping is done in 4 MB stripes by default.
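> 
> As a rough sketch of how I'm checking (the bucket name is a placeholder, and
> the admin socket path depends on the RGW instance name):
> 
>   radosgw-admin bucket stats --bucket=<bucket-name> | grep num_objects
>   # RGW stripes object data into rgw_obj_stripe_size pieces (4 MiB by default):
>   ceph --admin-daemon /var/run/ceph/ceph-client.rgw.<name>.asok config get rgw_obj_stripe_size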
> 

Without index sharding, 500k objects in a bucket can already cause large OMAP 
directories. I'd recommend that you at least start sharding them.
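
A minimal sketch of both routes, assuming a recent Jewel build (the shard
count of 8 is only an example; a common rule of thumb is to stay under
roughly 100k objects per shard):

  # ceph.conf: indexes of newly created buckets get sharded
  [client.rgw]
  rgw override bucket index max shards = 8

  # existing buckets can be resharded offline, if your radosgw-admin has the
  # reshard subcommand (it arrived late in the Jewel series):
  radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=8

Note that offline resharding builds a new index, so writes to the bucket
should be stopped while it runs.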

Wido

> Bear in mind that RGW is a small use case for us currently.
> Most of the data lives in a pool that is accessed by specialized servers with 
> plugins based on libradosstriper. That pool stores around 1.8 PB in 
> 32920055 objects.
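> 
> For what it's worth, the striper metadata shows up as xattrs on the first
> stripe object; something along these lines should display it (pool and
> object names are placeholders, and the .0000000000000000 suffix is, if I
> read it correctly, how libradosstriper names the first stripe):
> 
>   rados -p <pool> listxattr <object>.0000000000000000
>   rados -p <pool> getxattr <object>.0000000000000000 striper.layout.stripe_count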
> 
> One thing of note is that we have this:
> filestore_xattr_use_omap=1
> in our ceph.conf, and libradosstriper makes use of xattrs for its striping 
> metadata and locking mechanisms.
> 
> This option seems to have been removed some time ago, but the question is: 
> could it have any effect? This cluster was built in January and ran Jewel 
> initially.
> 
> I do see the xattrs in XFS, but sampling an omap dir from an OSD suggested 
> that there might be some xattrs in there too.
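> 
> For the sampling I'm doing something like this (paths are examples for a
> FileStore OSD on XFS; the OSD should be stopped before reading its leveldb,
> and older ceph-kvstore-tool builds may not take the store-type argument):
> 
>   getfattr -d -m '.*' /var/lib/ceph/osd/ceph-0/current/<pg>_head/<object-file>
>   ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-0/current/omap list | head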
> 
> I'm going to try restarting an OSD with a big omap and also extracting a copy 
> of one for further inspection.
> It seems to me like they might not be cleaning up old data. I'm fairly 
> certain an active cluster would have compacted enough for 3-month-old SSTs to 
> go away.
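> 
> In case anyone wants to try the same: newer builds can force a compaction
> online, and older ones can do it at restart via ceph.conf (I haven't
> verified which of these my Jewel build accepts):
> 
>   ceph tell osd.<id> compact
> 
>   # ceph.conf, [osd] section:
>   leveldb compact on mount = true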
> 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
