[ceph-users] Re: Lock errors in iscsi gateway

2020-04-28 Thread tdados
You can check the lock list on each RBD image and try removing the lock, but only when the VM is shut down and the RBD image is not in use: rbd lock list pool/volume-id, then rbd lock rm pool/volume-id "lock_id" client_id. This was a bug in the Luminous upgrade, I believe; I found it back in the day in this article …
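A minimal sketch of that sequence, with placeholder pool, image, lock, and client IDs (take the real values from the list output):

    # list current locks on the image (only with the VM shut down)
    rbd lock list volumes/volume-1234

    # remove the stale lock, quoting the lock ID and naming the locker
    rbd lock rm volumes/volume-1234 "auto 18446462598732840961" client.4123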

[ceph-users] Re: is ceph balancer doing anything?

2020-04-28 Thread tdados
Hello Andrei, I have roughly the same problem, but because it's production I don't want to make sudden moves that trigger data redistribution and affect clients (only with change approval and so on). From what I've tried on other test clusters, and according to the documentation, you need to …
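For what it's worth, the sequence on my test clusters was roughly this (upmap mode assumed; it requires all clients to be Luminous or newer):

    ceph balancer status        # current mode and whether it is active
    ceph balancer mode upmap    # pick the balancing strategy
    ceph balancer on            # start automatic, throttled PG remapping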

[ceph-users] Re: manually configure radosgw

2020-04-28 Thread tdados
Can you please post the keyring you use in the radosgw containers and also the Ceph config? It looks like an authentication issue, or your containers are not picking up your Ceph config.
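For comparison, a working setup usually looks something like this (the client name, paths, and port are examples, not requirements):

    # /etc/ceph/ceph.client.rgw.gateway1.keyring
    [client.rgw.gateway1]
        key = <redacted>
        caps mon = "allow rw"
        caps osd = "allow rwx"

    # matching section in the container's /etc/ceph/ceph.conf
    [client.rgw.gateway1]
        keyring = /etc/ceph/ceph.client.rgw.gateway1.keyring
        rgw frontends = beast port=8080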

[ceph-users] Re: OSD heartbeat failure

2020-06-18 Thread tdados
Hello Neil, you should never, never, never take a snapshot of a Ceph cluster (at the VM level, as you describe). I have my Ceph cluster in VirtualBox, but I only shut it down after setting flags such as ceph osd set noout, norebalance, pause, etc. Regarding the OSD heartbeat, here are some articles that might …
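The flags I set before a shutdown, more or less, and their counterparts on the way back up:

    ceph osd set noout          # don't mark stopped OSDs out
    ceph osd set norebalance    # don't start shuffling data
    ceph osd set pause          # stop client I/O entirely
    # ... power off, reboot, and then:
    ceph osd unset pause
    ceph osd unset norebalance
    ceph osd unset noout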

[ceph-users] Strange behavior for crush buckets of erasure-profile

2019-12-19 Thread tdados
I wanted to report some strange behaviour of a CRUSH rule / EC profile for radosgw items; I am not sure whether it's a bug or it's supposed to work that way. I am trying to implement the scenario below in my home lab. By default there is a "default" erasure-code-profile with the following settings: …
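For context, the profile can be inspected, and a custom one created, like this (the profile name and k/m values are only examples):

    ceph osd erasure-code-profile get default
    # shows k, m, plugin, technique and crush-failure-domain

    ceph osd erasure-code-profile set rgw-ec k=4 m=2 crush-failure-domain=host
    ceph osd pool create default.rgw.buckets.data 128 128 erasure rgw-ec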

[ceph-users] Re: can run more than one rgw multisite realm on one ceph cluster

2019-12-19 Thread tdados
Hello, I managed to do that 3 months ago with 2 realms, as I wanted to connect 2 different OpenStack environments (object store) and use different zones on the same Ceph cluster. Unfortunately I am now unable to recreate the scenario :( as the periods are getting mixed up, or I am doing something wrong …
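A rough outline of what worked back then, assuming two independent realms on one cluster (all names are illustrative):

    radosgw-admin realm create --rgw-realm=realm-a --default
    radosgw-admin zonegroup create --rgw-zonegroup=zg-a --rgw-realm=realm-a --master --default
    radosgw-admin zone create --rgw-zonegroup=zg-a --rgw-zone=zone-a --master --default
    radosgw-admin period update --commit --rgw-realm=realm-a

    radosgw-admin realm create --rgw-realm=realm-b
    radosgw-admin zonegroup create --rgw-zonegroup=zg-b --rgw-realm=realm-b --master
    radosgw-admin zone create --rgw-zonegroup=zg-b --rgw-zone=zone-b --master
    radosgw-admin period update --commit --rgw-realm=realm-b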

[ceph-users] Re: rbd images inaccessible for a longer period of time

2019-12-19 Thread tdados
I don't have a lot of experience with rbd-nbd, but I suppose it works the same as rbd. We use Xen as the hypervisor, and sometimes after a crash we need to remove the locks on the volumes before remapping them, as they are dead locks. Now, removing the locks will sometimes put a blacklist entry on these …
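The cleanup after such a crash looks roughly like this (the image, lock, client, and address values are placeholders taken from the respective list commands):

    rbd lock list rbd/vm-disk-1
    rbd lock rm rbd/vm-disk-1 "auto 140522936111184" client.78213

    # lock removal can blacklist the old client; check and clear it
    ceph osd blacklist ls
    ceph osd blacklist rm 10.0.0.21:0/3492594096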

[ceph-users] Re: can run more than one rgw multisite realm on one ceph cluster

2019-12-20 Thread tdados
Thanks a lot, Casey. With only one realm as the default, does it mean anything in terms of whether both radosgw instances can operate normally? And thanks for the "period update --commit --realm-id" command; I think that might do the trick. I will test it later today.
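The form of the command I plan to test, with the realm ID looked up first:

    radosgw-admin realm list                 # note the ID of the non-default realm
    radosgw-admin period update --commit --realm-id <realm-id>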

[ceph-users] Re: Servicing multiple OpenStack clusters from the same Ceph cluster

2020-01-29 Thread tdados
Hello, we have recently deployed that and it's working fine. We deployed different keys for the different OpenStack clusters, of course, and they all use the same cinder/nova/glance pools. The only risk is if a client from one OpenStack cluster creates a volume and the ID that will be generated …
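For illustration, the per-cluster keys were created more or less like this (the client and pool names are our conventions, not requirements):

    # one cinder key per OpenStack cluster, both pointing at the same pools
    ceph auth get-or-create client.cinder-osA mon 'profile rbd' \
        osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'
    ceph auth get-or-create client.cinder-osB mon 'profile rbd' \
        osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'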

[ceph-users] Zabbix module failed to send data - SSL support

2020-03-16 Thread tdados
Hello all, we are having an issue with the Ceph Zabbix module: it is failing to send data. The reason is that in our Zabbix infrastructure we use encryption, with certificate-based agent connections as well. I can see in the logs from the Zabbix proxy servers that sends fail for that reason. 1329: …
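For reference, this is how the module is configured on our side (standard module commands; nothing here works around the TLS requirement):

    ceph mgr module enable zabbix
    ceph zabbix config-set zabbix_host zabbix-proxy.example.com
    ceph zabbix config-set identifier ceph-cluster-01
    ceph zabbix config-show
    ceph zabbix send            # trigger a manual send to test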