Happens to me too, on gmail. I'm on half a dozen other mailman lists with
no issues at all. I escalated this problem to the ceph mailing list
maintainer and they said it's an issue with their provider, but that was
probably a year ago.
On Tue, Oct 9, 2018 at 7:04 AM Elias Abacioglu <elias.abacio
Has anyone automated the ability to generate S3 keys for OpenStack users in
Ceph? Right now we take in a user's request manually (Hey, we need an S3 API
key for our OpenStack project 'X', can you help?). We as cloud/ceph admins
just use radosgw-admin to create them an access/secret key pair for their
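For what it's worth, a minimal sketch of how this could be scripted is below. It assumes the radosgw user for the OpenStack project already exists (e.g. via the Keystone integration), that the host can run radosgw-admin against the cluster, and the uid in __main__ is made up:

#!/usr/bin/env python3
"""Sketch: generate an S3 access/secret key pair for an existing radosgw user."""
import json
import subprocess


def create_s3_key(rgw_uid):
    # "radosgw-admin key create" prints the updated user record as JSON
    result = subprocess.run(
        ["radosgw-admin", "key", "create",
         "--uid=" + rgw_uid,
         "--key-type=s3",
         "--gen-access-key", "--gen-secret"],
        check=True, capture_output=True, text=True,
    )
    user = json.loads(result.stdout)
    # Assume the freshly generated pair is the last entry in "keys";
    # to be safe, diff against the key list captured beforehand.
    key = user["keys"][-1]
    return key["access_key"], key["secret_key"]


if __name__ == "__main__":
    access, secret = create_s3_key("project-x")  # hypothetical uid
    print("access_key:", access)
    print("secret_key:", secret)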
Hi there,
We have an old cluster, originally built on Giant, that we have maintained and
upgraded over time; we are now running Mimic 13.2.5. The other day we
received a HEALTH_WARN about 1 large omap object in the pool '.usage', which
is our usage_log_pool as defined in our radosgw zone.
I am trying to
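(For anyone following along: the usage log behind '.usage' can be trimmed with radosgw-admin. Below is a minimal sketch of a periodic trim job; the 90-day retention window and the idea that older usage entries can be dropped are assumptions, so check your own reporting/billing requirements first.)

#!/usr/bin/env python3
"""Sketch: periodically trim the radosgw usage log so the .usage omap objects stay small."""
import subprocess
from datetime import datetime, timedelta

KEEP_DAYS = 90  # assumed retention window, adjust to your requirements


def trim_usage_log():
    cutoff = datetime.utcnow() - timedelta(days=KEEP_DAYS)
    subprocess.run(
        ["radosgw-admin", "usage", "trim",
         "--end-date=" + cutoff.strftime("%Y-%m-%d"),
         # some releases want this when trimming without a --uid;
         # it is harmless where it is not required
         "--yes-i-really-mean-it"],
        check=True,
    )


if __name__ == "__main__":
    trim_usage_log()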
Thanks for chiming in, Konstantin!
Wouldn't setting this value to 0 disable the sharding?
Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/
rgw override bucket index max shards
Description: Represents the number of shards for the bucket index
object; a value of zero indicates there is no sharding.
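For reference, setting it cluster-wide would look something like the ceph.conf snippet below; the section name and shard count are only an example (per the config reference it can also go in a client or global section so radosgw-admin sees it), and it only affects buckets created after the change:

[client.rgw.gateway-1]
# example instance name and shard count, not a recommendation
rgw_override_bucket_index_max_shards = 16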
odley wrote:
>
>
> On 5/24/19 1:15 PM, shubjero wrote:
> > Thanks for chiming in Konstantin!
> >
> > Wouldn't setting this value to 0 disable the sharding?
> >
> > Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/
> >
> > rgw override
Good day,
We have a sizeable ceph deployment and use object storage heavily. We
also integrate our object storage with OpenStack, but sometimes we are
required to create S3 keys for some of our users (aws-cli, Java apps
that speak S3, etc.). I was wondering if it is possible to see an audit
trail of
Florian,
Thanks for posting about this issue. This is something we have been
experiencing (stale exclusive locks) more frequently with our OpenStack
and Ceph cloud, as our datacentre has recently had reliability issues
with power and cooling that caused several unexpected
shutdowns.
At this
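(For reference, the sketch below is one way to inspect a suspect image; it only lists the lockers, with made-up pool/image names, since removing a lock is only safe once the client that held it is confirmed gone.)

#!/usr/bin/env python3
"""Sketch: list lockers on an RBD image suspected of holding a stale exclusive lock."""
import json
import subprocess


def list_lockers(image_spec):
    result = subprocess.run(
        ["rbd", "lock", "ls", image_spec, "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)


if __name__ == "__main__":
    print(json.dumps(list_lockers("volumes/volume-1234"), indent=2))
    # Clearing a stale lock (after confirming the owner is gone) would be:
    #   rbd lock rm volumes/volume-1234 <lock-id> <locker>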