Hello Ondrej,
Does renaming the bucket help? I see the command [1] takes the UID.
How about you take a maintenance window for both of your users and try
renaming the usernames or bucket names to see if either helps?
[1] https://www.ibm.com/docs/en/storage-ceph/7?topic=management-renaming-buck
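For what it's worth, the command in [1] should boil down to something like
the below (bucket and user names are placeholders; double-check the flags
against your release):

    radosgw-admin bucket link --bucket=original-bucket \
        --bucket-new-name=new-bucket --uid=user-id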
Agreed, though today either of those form factors limits one's choice of
manufacturer.
> There are models to fit that, but if you're also considering new drives,
> you can get further density in E1/E3
On Fri, Jan 12, 2024 at 02:32:12PM +, Drew Weaver wrote:
> Hello,
>
> So we were going to replace a Ceph cluster with some hardware we had
> laying around using SATA HBAs but I was told that the only right way
> to build Ceph in 2023 is with direct attach NVMe.
>
> Does anyone have any recomm
I could be wrong, but as far as I can see you have 9 chunks, which requires
9 failure domains.
Your failure domain is set to datacenter, of which you only have 3, so that
won't work.
You need to set your failure domain to host and then create a CRUSH rule
that chooses a DC and then chooses 3 hosts within each; see the sketch below.
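As a rough sketch only (the rule name and id are placeholders, and you should
check the indep counts against your actual k+m), a rule for 3 datacenters
with 3 hosts each could look something like:

    rule ec_3dc_3host {
        # placeholder id; pick a free one in your map
        id 99
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        # pick 3 datacenters, then 3 distinct hosts in each = 9 targets
        step choose indep 3 type datacenter
        step chooseleaf indep 3 type host
        step emit
    }

That gives 3 x 3 = 9 placement targets, one per chunk, spread across the
three DCs. You would decompile the CRUSH map with crushtool, add the rule,
recompile and set it, then point the EC pool at the new rule.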