Listing will always take forever with a high shard count, AFAIK. That's the
tradeoff for sharding. Are all 2B of those objects in one bucket? How does
your read and write performance compare to a bucket with a much smaller
number of objects (thousands), at that same shard count?
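As a quick sanity check, it can help to compare a single-page listing against a full one. A minimal sketch (assuming radosgw-admin and the aws-cli are available, and `$bucketname` is a placeholder for your bucket):

```shell
# Report object count from the bucket metadata, without walking the index
# (field names may vary slightly between Ceph releases):
radosgw-admin bucket stats --bucket="$bucketname" | grep num_objects

# Time a single-page listing (1000 keys) rather than a full recursive ls;
# this reads only one page of the sharded index:
time aws s3api list-objects-v2 --bucket "$bucketname" --max-keys 1000
```

If the single-page listing is fast but a full `aws s3 ls` is not, the cost is in walking all 32768 shards, not in any one request.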

On Tue, May 1, 2018 at 7:59 AM, Katie Holly <8ld3j...@meo.ws> wrote:

> One of our radosgw buckets has grown a lot in size; `radosgw-admin bucket
> stats --bucket=$bucketname` reports a total of 2,110,269,538 objects, with
> the bucket index sharded across 32768 shards. Listing the root context of
> the bucket with `aws s3 ls s3://$bucketname` takes more than an hour, which
> is the hard time-to-first-byte limit on our nginx reverse proxy, and the
> aws-cli times out long before that limit is hit.
>
> The software we use supports sharding the data across multiple s3 buckets
> but before I go ahead and enable this, has anyone ever had that many
> objects in a single RGW bucket and can let me know how you solved the
> problem of RGW taking a long time to read the full index?
>
> --
> Best regards
>
> Katie Holly
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
