Hi all,
I'm having difficulties removing a CephFS volume that I set up for
testing. I've been through this with RBDs, so I do know about
`mon_allow_pool_delete`. However, it doesn't help in this case.
It is a cluster with 3 monitors. You can find a console log of me
verifying that `mon_allow_pool_delete` is indeed set and the removal failing anyway.
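
For reference, the sequence I would expect to do the job looks roughly
like this (the volume, file system, and pool names below are just
placeholders, not my actual setup):

    # allow pool deletion on the monitors
    ceph config set mon mon_allow_pool_delete true

    # remove the volume; this should also remove its data and metadata pools
    ceph fs volume rm <volume_name> --yes-i-really-mean-it

    # or, for a file system not created via `fs volume`, the manual route:
    ceph fs fail <fs_name>
    ceph fs rm <fs_name> --yes-i-really-mean-it
    ceph osd pool rm <pool_name> <pool_name> --yes-i-really-really-mean-it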
Hello,
I am trying to get a better understanding of how Ceph handles changes to
CRUSH rules, such as changing the failure domain. I performed this
(maybe somewhat academic, sorry) exercise, and would love to verify my
conclusions (or get a good explanation of my observations):
Starting point:
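
(For illustration, the kind of failure-domain change I mean would be
something like the following; rule and pool names are placeholders.)

    # inspect the rule the pool currently uses
    ceph osd crush rule dump replicated_rule

    # create a new replicated rule with osd instead of host as the failure domain
    ceph osd crush rule create-replicated replicated_osd default osd

    # point the pool at the new rule
    ceph osd pool set <pool_name> crush_rule replicated_osd

As far as I understand, once the pool's rule changes, its PGs get remapped
and backfill moves data around until the new failure domain is satisfied.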