You can check the lock list on each RBD image, and you can try removing the lock, but only when the VM is shut down and the RBD image is not in use:
rbd lock list pool/volume-id
rbd lock rm pool/volume-id "lock_id" client_id
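For illustration, a run against a hypothetical volume might look like this (pool name, volume id, lock id, and client id are all made up here):

rbd lock list volumes/volume-1234
# Locker        ID                     Address
# client.4567   auto 140622444461824   192.168.1.10:0/123456789
rbd lock rm volumes/volume-1234 "auto 140622444461824" client.4567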
This was a bug in the Luminous upgrade, I believe; I found it back in the day from this article.
Hello Andrei,
I have kind of the same problem, but because it's production I don't want to make sudden moves that would cause data redistribution and affect clients (only with change approval and so on). From what I have tried in other test clusters, and according to the documentation... you need to
Can you please share the keyring you use in the radosgw containers and also the ceph config? It looks like an authentication issue, or your containers are not picking up your ceph config.
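If it helps, the usual way I'd collect that information is below (the rgw client name is a placeholder, not taken from your setup):

# on the host running the radosgw container
cat /etc/ceph/ceph.conf
ceph auth get client.rgw.<gateway-name>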
Hello Neil,
You should never, never, never take a snapshot of a Ceph cluster (from a VM perspective, as you say). I have my Ceph cluster in VirtualBox, but I only shut down my cluster with commands like ceph osd set noout, norebalance, pause, etc.
Regarding the OSD heartbeat, here are some articles that might help.
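For reference, a minimal sketch of the flags I mean, set before a full shutdown and cleared again afterwards (standard Ceph CLI, run from any node with the admin keyring):

# before shutting the cluster down
ceph osd set noout
ceph osd set norebalance
ceph osd set pause
# ... power off the VMs ...
# after bringing everything back up
ceph osd unset pause
ceph osd unset norebalance
ceph osd unset noout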
So I wanted to report some strange CRUSH rule / EC profile behaviour regarding radosgw items, which I am not sure is a bug or is supposed to work that way.
I am trying to implement the scenario below in my home lab:
By default there is a "default" erasure-code-profile with the below settings:
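You can dump it with the command below; the values shown are what a stock profile typically contains, from memory, so treat them as an assumption rather than this cluster's exact output:

ceph osd erasure-code-profile get default
# k=2
# m=2
# plugin=jerasure
# technique=reed_sol_van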
Hello,
I managed to do that 3 months ago with 2 realms, as I wanted to connect 2 different OpenStack environments (object store) and use different zones on the same Ceph cluster.
Now, unfortunately, I am not able to recreate the scenario :( as the periods are getting mixed up, or I am doing something wrong.
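For what it's worth, the rough shape of the commands I used is below; all realm/zonegroup/zone names are placeholders rather than the ones from my cluster:

radosgw-admin realm create --rgw-realm=realm-a --default
radosgw-admin zonegroup create --rgw-zonegroup=zg-a --rgw-realm=realm-a --master --default
radosgw-admin zone create --rgw-zonegroup=zg-a --rgw-zone=zone-a --master --default
radosgw-admin period update --commit --rgw-realm=realm-a
# second realm, committed against its own period
radosgw-admin realm create --rgw-realm=realm-b
radosgw-admin zonegroup create --rgw-zonegroup=zg-b --rgw-realm=realm-b --master
radosgw-admin zone create --rgw-zonegroup=zg-b --rgw-zone=zone-b --master
radosgw-admin period update --commit --rgw-realm=realm-b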
I don't have a lot of experience with rbd-nbd, but I suppose it works the same as rbd. We use Xen as the hypervisor, and sometimes when there is a crash we need to remove the locks on the volumes when remapping them, as these are dead locks.
Now, removing the locks will sometimes put a blacklist on these clients.
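In case it's useful, checking and clearing the blacklist looks like this (the client address below is made up; note that newer Ceph releases spell the command blocklist instead of blacklist):

ceph osd blacklist ls
ceph osd blacklist rm 192.168.1.10:0/123456789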
Thanks a lot Casy.
Having only one realm as the default, does it mean anything in terms of whether both radosgw instances can operate normally?
And thanks for the "period update --commit --realm-id" command.
I think that might do the trick. I will test it later today.
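For reference, I assume the full form is something like the below (the realm id is a placeholder):

radosgw-admin period update --commit --realm-id=<realm-uuid>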
Hello,
We have recently deployed that and it's working fine. We have deployed different keys for the different OpenStack clusters, of course, and they are using the same cinder/nova/glance pools.
The only risk is if a client from one OpenStack cluster creates a volume and the id that will be generated
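A minimal sketch of how the separate keys could look, using the standard rbd cap profiles from the Ceph/OpenStack docs (the client names are placeholders for whatever each cluster actually uses):

ceph auth get-or-create client.cinder-clusterA mon 'profile rbd' osd 'profile rbd pool=cinder, profile rbd pool=nova, profile rbd-read-only pool=glance'
ceph auth get-or-create client.cinder-clusterB mon 'profile rbd' osd 'profile rbd pool=cinder, profile rbd pool=nova, profile rbd-read-only pool=glance'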
Hello all,
We are having an issue with the Ceph Zabbix module: it is failing to send data. The reason is that in our Zabbix infrastructure we use encryption, with certificate-based agent connections as well. I can see in the logs from the Zabbix proxy servers that the sends are failing for that reason.
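To illustrate what seems to be missing: a manual zabbix_sender call with certificate encryption needs TLS options along these lines (hostnames and paths are placeholders), while as far as I can tell the mgr module invokes zabbix_sender without them:

zabbix_sender -z zabbix-proxy.example.com -s ceph-cluster -k ceph.test -o 1 \
  --tls-connect cert \
  --tls-ca-file /etc/zabbix/ca.crt \
  --tls-cert-file /etc/zabbix/client.crt \
  --tls-key-file /etc/zabbix/client.key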