Hi All,
We've been dealing with what seems to be a pretty annoying bug for a while
now: we are unable to delete a customer's bucket that appears to contain an
extremely large number of aborted multipart uploads. I've had $(radosgw-admin
bucket rm --bucket=pusulax --purge-objects) running in a screen session.
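(Not from the original report, apart from the bucket name: a rough sketch of how
the stuck multipart uploads can be inspected and aborted from the S3 side,
assuming an aws CLI profile pointed at the RGW endpoint. The key and upload ID
are placeholders taken from the listing output.)

# List multipart uploads that were started but never completed or aborted
aws s3api list-multipart-uploads --bucket pusulax

# Abort a single upload, using the Key and UploadId values from the listing above
aws s3api abort-multipart-upload --bucket pusulax --key <object-key> --upload-id <upload-id>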
Hi Liam, All,
We have also run into this bug:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PCYY2MKRPCPIXZLZV5NNBWVHDXKWXVAG/
Like you, we are also running Octopus 15.2.3.
Downgrading the RGWs at this point is not ideal, but if a fix isn't found
soon we might have to.
Has a bug report been filed for this?
Hi All,
Sorry for the double email; I accidentally sent the previous e-mail via a
keyboard shortcut before it was finished :)
I'm investigating what appears to be a bug in RGW stats. This is a brand-new
cluster running 15.2.3.
One of our customers reached out, saying they were hitting their quota.
Hi All,
I'm investigating what appears to be a bug in RGW stats. This is a brand-new
cluster running 15.2.3.
One of our customers reached out, saying they were hitting their quota (S3
error: 403 (QuotaExceeded)). The user-wide max_objects quota we set is 50
million objects, so this would be impossible.
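(Not part of the original message; just a sketch of the commands involved in
checking the numbers, with a placeholder uid:)

# Show the quotas configured on the user, including user_quota max_objects
radosgw-admin user info --uid=<customer-uid>

# Recalculate and display the usage stats the quota check is based on
radosgw-admin user stats --uid=<customer-uid> --sync-stats

# How a 50 million object user-scope quota is typically set and enabled
radosgw-admin quota set --quota-scope=user --uid=<customer-uid> --max-objects=50000000
radosgw-admin quota enable --quota-scope=user --uid=<customer-uid>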
Hi all!
Reaching out again about this issue since I haven't had much luck. We've
been seeing some strange behavior with our object storage cluster. While
bucket stats (radosgw-admin bucket stats) normally returns in a matter of
seconds, we frequently see it take almost ten minutes, which is not normal.
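(A quick way to put numbers on it, not taken from the thread; the bucket name is
a placeholder:)

# Stats for a single bucket, normally back within seconds
time radosgw-admin bucket stats --bucket=<bucket-name>

# Without --bucket, stats are gathered for every bucket, which does a lot more work
time radosgw-admin bucket stats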
On Tue, Oct 29, 2019 at 3:22 AM Florian Haas wrote:
> Hi David,
>
> On 28/10/2019 20:44, David Monschein wrote:
> > Hi All,
> >
> > Running an object storage cluster, originally deployed with Nautilus
> > 14.2.1 and now running 14.2.4.
> >
> > Last week
Hi All,
Running an object storage cluster, originally deployed with Nautilus 14.2.1
and now running 14.2.4.
Last week I was alerted to a new warning from my object storage cluster:
[root@ceph1 ~]# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool '...'
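(Not from the original message; this is the kind of follow-up we'd run to see
what tripped the warning. The log path is the stock location and the option
name is the one used on recent releases:)

# The per-object omap key count threshold checked during deep scrub
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold

# The cluster log records which object and PG were flagged
grep 'Large omap object found' /var/log/ceph/ceph.log

# If the flagged object is an RGW bucket index shard, see which buckets exceed
# the per-shard object limit
radosgw-admin bucket limit check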