>
> Based on our observation of the impact of the balancer on the
> performance of the entire cluster, we have drawn conclusions that we
> would like to discuss with you.
>
> - A newly created pool should be balanced before being handed over
> to the user. This, I believe, is quite evident.
>
Hi Ceph users,
We are using Ceph Pacific (16) in this specific deployment.
In our use case we do not want our users to be able to generate signature v4
URLs, because these bypass the policies that we set on buckets (e.g. IP
restrictions).
Currently we have a sidecar reverse proxy running that filters
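For context on how such a filter can work: SigV4 presigned URLs carry their credentials in the query string (X-Amz-Algorithm=AWS4-HMAC-SHA256, X-Amz-Signature, ...), so a proxy can reject them by inspecting the query alone. A minimal sketch, assuming the helper name and the echoed verdicts (both my own, not from the original mail):

```shell
#!/bin/sh
# Hypothetical helper: detect an AWS Signature Version 4 presigned URL by
# its query-string markers. Presigned v4 requests always carry
# X-Amz-Algorithm=AWS4-HMAC-SHA256 and an X-Amz-Signature parameter.
is_presigned_v4() {
  case "$1" in
    *X-Amz-Algorithm=AWS4-HMAC-SHA256*|*X-Amz-Signature=*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example: a real proxy would return 403 here instead of echoing.
if is_presigned_v4 "X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=abc"; then
  echo "blocked"
else
  echo "allowed"
fi
# prints: blocked
```

Note that header-signed v4 requests (credentials in the Authorization header) are the regular authenticated client path and would pass this check unchanged; only query-string signatures are rejected.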
Hi Xiubo,
I will update the case. I'm afraid this will have to wait a little while, though:
I'm tied up at the moment and also don't have a test cluster that would help
speed things up. I will update you; please keep the tracker open.
Best regards,
Frank Schilder
AIT Risø Campus
I was able to (almost) reproduce the issue in a (Pacific) test
cluster. I rebuilt the monmap from the OSDs, brought everything back
up, and started the mds recovery as described in [1]:
ceph fs new <fs_name> <metadata_pool> <data_pool> --force --recover
Then I added two mds daemons, which went into standby:
---snip---
Started C
Some more information on the damaged CephFS; apparently the journal is
damaged:
---snip---
# cephfs-journal-tool --rank=storage:0 --journal=mdlog journal inspect
2023-12-08T15:35:22.922+0200 7f834d0320c0 -1 Missing object 200.000527c4
2023-12-08T15:35:22.938+0200 7f834d0320c0 -1 Bad entry sta
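When the mdlog shows missing objects and bad entries like this, the documented disaster-recovery path is to back up the journal, salvage whatever metadata events are still readable, and only then reset it. A sketch, assuming the rank storage:0 from the inspect command above; the backup file name and the DRY_RUN wrapper are my additions, and since a journal reset discards unrecoverable events, double-check the sequence against the CephFS disaster-recovery docs before running it for real:

```shell
#!/bin/sh
# DRY_RUN defaults to on, so this script only prints the commands;
# run with DRY_RUN= (empty) to actually execute them against a cluster.
DRY_RUN=${DRY_RUN-1}
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

# 1. Always take a backup of the journal first.
run cephfs-journal-tool --rank=storage:0 journal export backup.bin
# 2. Replay the metadata events that are still readable into the store.
run cephfs-journal-tool --rank=storage:0 event recover_dentries summary
# 3. Only then truncate the damaged journal.
run cephfs-journal-tool --rank=storage:0 journal reset
```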
On Fri, Dec 08, 2023 at 10:41:59AM +0100, marc@singer.services wrote: