Dear all,
To give some closure here: I was missing the "admin" service in my "rgw"
deployment. I had only "s3" and "s3website" explicitly configured in my
cephadm deployment, which turned out to be the mistake. Once the admin
service was deployed, the dashboard created the bucket and the necessary
entries.
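For anyone who runs into the same thing: the dashboard manages RGW through
the admin REST API, so "admin" has to be among the enabled APIs. As far as I
can tell this maps to the rgw_enable_apis option; a minimal sketch of putting
it back (the config target "client.rgw" and the service name in the restart
command are assumptions, adjust them to your actual rgw section and cephadm
service name):

  # Re-enable the admin API next to the explicitly configured ones
  ceph config set client.rgw rgw_enable_apis "s3, s3website, admin"

  # Restart the RGW daemons so they pick up the change
  # (rgw.<service_id> is a placeholder for the cephadm service name)
  ceph orch restart rgw.<service_id>

Since the default value of rgw_enable_apis already contains "admin", simply
dropping the override (ceph config rm client.rgw rgw_enable_apis) should work
just as well.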
Hi Paul,
Could you create a Ceph tracker issue (tracker.ceph.com) and list the things
your investigation found to be suboptimal? We'd like to hear more about this.
Alternatively, you could list the MDS issues here.
Thanks,
Milind
On Sun, Jan 7, 2024 at 4:37 PM Paul Mezzanini wrote:
>
> We
Hello dear fellow Ceph users,
it seems that for some months now, all current Ceph releases (16.x, 17.x,
18.x) have carried a bug in ceph-volume that causes disk activation to fail
with the error "IndexError: list index out of range" (details below, [0]).
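If you want to check whether a host is affected, the traceback ends up in the
ceph-volume log; a rough sketch (the path assumes a cephadm-managed cluster,
package-based installs log to /var/log/ceph/ceph-volume.log instead):

  # Look for the failed activation in the per-cluster ceph-volume log
  grep -B2 -A5 "IndexError: list index out of range" /var/log/ceph/*/ceph-volume.log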
It also seems there is already a fix for it available
We've seen it use as much as 1.6 TB of RAM/swap. Swap makes it slow, but a
slow recovery is better than no recovery. My coworker looked into it at the
source code level, and while it does some things suboptimally, that's how
it's currently written.
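For reference, a plain swap file is the simplest way to get that kind of
headroom on the MDS host; a rough sketch only, with size and path as
placeholders you would adjust to the machine:

  # Create and enable a temporary swap file on the MDS host
  fallocate -l 512G /var/swapfile   # size is a placeholder; use dd instead
                                    # if your filesystem dislikes fallocate'd swap
  chmod 600 /var/swapfile
  mkswap /var/swapfile
  swapon /var/swapfile
  swapon --show                     # confirm the new swap is active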
The MDS code needs some real love if ceph
Hi Paul,
your suggestion was correct. The MDS went through the replay state and stayed
in the active state for a few minutes. But then it was killed because its
memory consumption was too high.
> @mds.cephfs.storage01.pgperp.service: Main process exited, code=exited,
> status=137/n/a
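From what I can tell, status=137 means the daemon was SIGKILLed (128 + 9),
which in this case presumably was the kernel OOM killer. A rough sketch of
how one can confirm this (journalctl and the mds_cache_memory_limit option
are standard; note the option is only a soft target for the MDS cache, not a
hard cap on the process):

  # Check whether the kernel OOM killer terminated the MDS
  journalctl -k | grep -i -E "out of memory|oom"

  # Current cache memory target for the MDS daemons
  ceph config get mds mds_cache_memory_limit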
How could I raise the