@David Turner
Did your bucket delete ever finish? I am up to 35M incomplete uploads,
and I doubt that I actually had that many upload attempts. I could be
wrong though.
Is there a way to force bucket deletion, even at the cost of not
cleaning up space?
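One thing I am considering is clearing the backlog from the S3 side
instead. Untested sketch, assuming aws-cli and jq pointed at the RGW
endpoint (the endpoint URL below is a placeholder, and with millions of
uploads the listing would also need pagination handling):
$ aws --endpoint-url http://rgw.example.local s3api list-multipart-uploads \
      --bucket di-omt-mapupdate --output json \
  | jq -r '.Uploads[]? | [.Key, .UploadId] | @tsv' \
  | while IFS=$'\t' read -r key uploadid; do
      # abort each pending multipart upload so its parts get cleaned up
      aws --endpoint-url http://rgw.example.local s3api abort-multipart-upload \
        --bucket di-omt-mapupdate --key "$key" --upload-id "$uploadid"
    done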
On Tue, Jun 25, 2019 at 12:29 PM J. Eric Ivancich wrote:
> …uploading to it constantly with a very bad network connection.
>
> On Fri, Jun 21, 2019 at 1:13 PM Sergei Genchev wrote:
Hello,
Trying to delete bucket using radosgw-admin, and failing. Bucket has
50K objects but all of them are large. This is what I get:
$ radosgw-admin bucket rm --bucket=di-omt-mapupdate --purge-objects --bypass-gc
2019-06-21 17:09:12.424 7f53f621f700 0 WARNING : aborted 1000 incomplete multipart uploads
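To keep this from piling up again, if the RGW release in use supports
AbortIncompleteMultipartUpload in lifecycle rules (I am not sure mine
does), a bucket lifecycle rule might help; the file name and the 1-day
window below are only for illustration:
$ cat > lc.json <<'EOF'
{
  "Rules": [{
    "ID": "abort-stale-mpu",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
  }]
}
EOF
$ aws --endpoint-url http://rgw.example.local s3api \
      put-bucket-lifecycle-configuration \
      --bucket di-omt-mapupdate --lifecycle-configuration file://lc.json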
> …OSD memory usage includes many other
> things, like pglog, so it's important to know whether your cluster is
> doing recovery.
>
> On Sat, Jun 8, 2019 at 5:35 AM Sergei Genchev wrote:
Hi,
My OSD processes are constantly getting killed by OOM killer. My
cluster has 5 servers, each with 18 spinning disks, running 18 OSD
daemons in 48GB of memory.
I was trying to limit OSD cache, according to
http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/
[osd]
bluestore_cache_size = …
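For concreteness, this is the shape of what I was setting. The values
are illustrative, not recommendations; with 18 OSDs sharing 48GB there
is well under 3GB per daemon before the OS takes its share:
[osd]
# hard cap on the bluestore cache per OSD (1 GiB here)
bluestore_cache_size = 1073741824
# newer Mimic/Luminous point releases can cap total daemon memory instead:
# osd_memory_target = 1610612736
Regarding the pglog/recovery question above, I believe the per-daemon
memory breakdown and the recovery state can be checked with:
$ ceph daemon osd.0 dump_mempools
$ ceph -s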
--> RuntimeError: command returned non-zero exit status: 5
This is how destroy failed before I started deleting volumes.
On Thu, Apr 18, 2019 at 2:26 PM Alfredo Deza wrote:
> On Thu, Apr 18, 2019 at 3:01 PM Sergei Genchev wrote:
> >
> > Thank you Alfredo
> > I did not have any reasons to keep…
…error: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz:
probing initialization failed: No such file or directory
--> RuntimeError: command returned non-zero exit status: 1
On Thu, Apr 18, 2019 at 10:10 AM Alfredo Deza wrote:
> On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev
> wrote:
Hello,
I have a server with 18 disks, and 17 OSD daemons configured. One of the
OSD daemons failed to deploy with ceph-deploy. The reason for the failure
is unimportant at this point; I believe it was a race condition, as I was
running ceph-deploy inside a while loop over all disks in this server.
Now I…
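What I am considering, as an untested sketch (the device name and
hostname below are placeholders, and --destroy wipes the LVM metadata
on that disk, so double-check the device first):
$ ceph-volume lvm zap /dev/sdd --destroy   # clear the half-created LVs/VG
$ ceph-deploy osd create --data /dev/sdd myhost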
On Tue, Apr 16, 2019 at 1:28 PM Paul Emmerich wrote:
>
> I think the warning is triggered by the mgr daemon and not the mon,
> try setting it there
>
Thank you Paul.
How do I set it in the mgr daemon?
I tried:
ceph tell mon.* injectargs '--mgr_pg_warn_max_object_skew 0'
ceph tell mgr.* injectargs '--mgr_pg_warn_max_object_skew 0'
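Assuming the centralized config database on Mimic and later applies
here, persisting the option may be better than injectargs. The option
documented upstream is mon_pg_warn_max_object_skew, which I believe
some releases read from the mgr, so perhaps:
$ ceph config set mgr mon_pg_warn_max_object_skew 0
$ ceph health detail   # re-check once the mgr picks up the change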
Hi,
I am getting a health warning about many more objects per PG than
average. It seems to be common with RadosGW, where pools other than the
data pool contain a very small number of objects.
ceph@ola-s3-stg:/etc/ceph$ ceph health detail
HEALTH_WARN 1 pools have many more objects per pg than average
MANY_OBJECTS_PER_PG …
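If muting is not the right fix, I assume the skew itself can be reduced
by giving the object-heavy pool more PGs. The pool name below is the
default RGW data pool and the counts are placeholders; note that pg_num
can only be increased, never decreased:
$ ceph osd pool set default.rgw.buckets.data pg_num 256
$ ceph osd pool set default.rgw.buckets.data pgp_num 256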